I have been warned never to be alone with a girl, because apparently false rape accusations are rampant in my country.
I have also been assured that large percentages of girls are seriously sexually assaulted, and a significant percentage are raped.
All this fairly recently.
While I am sure these things happen, people around me seem convinced that these are far, far more common than anything that seems realistic to me. I meet people from a lot of different social groups, and most people I know I’d have a hard time believing would do any of these things.
If the “rape is rampant” side is to be believed, at least a fifth of women get seriously sexually assaulted. Who is doing all this assaulting?! Are a tiny portion of men assaulting a lot of women? Are a huge chunk of men assaulting women? Why?
If the “false accusations” group is right, all the same questions!!
The big question for me is: why can’t being a decent person be enough to let you trust others anymore?
So now I’m wondering whether I’m just oblivious, or whether something has been pushing people to trust each other less recently.
I’m probably not in your country, but I seem to recall statistics from 40 years ago in the USA that had more than 50% of women experiencing at least one sexual assault in their lifetime, including assaults that did not succeed.
Other than that – if you hear about it a lot, your subconscious decides it’s common. That’s why people in the US are currently afraid to allow their children out of their sight – some random stranger is, they think, going to try to kidnap or assault them. (The actual statistics there suggest this is actually quite rare.)
This is far too long ago for me to find sources, but it’s well before a lot of recent attitude and definition changes. I’d guess it included attempted rape, but not e.g. butt pinching, groping etc. And it certainly would not have included lack of enthusiastic consent; that standard wasn’t yet current.
It’s pretty easy to find statistics to determine how common these two crimes are. The prevalence of false reporting of rape is approximately 2-10%. However, it’s worth noting that not every such false report involves an accusation against an actual person; some are entirely phantasmal. A more realistic rate of false reporting, based on the above link, would be around 2.5%. There were about 90,000 rapes reported in the US in 2015, of which ~97.5% were substantiated. For a male population of ~160 million, that means your chances of being the subject of a false report of rape in any given year are about 1 in 71,111. Based on an estimate from the National Safety Council, you’re about as likely to die in a fire in any given year.
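For anyone who wants to check it, the “1 in 71,111” figure can be re-derived in a few lines. Note that the 2.5% false-report rate, the 90,000 reports, and the ~160 million male population are the assumptions made in this comment, not settled statistics:

```python
# Rough re-derivation of the "1 in 71,111" figure above. All inputs
# are the comment's assumptions, not authoritative numbers.
REPORTS = 90_000          # rapes reported in the US, 2015
FALSE_RATE = 0.025        # assumed share of reports that are false
MALE_POP = 160_000_000    # approximate US male population

false_reports = REPORTS * FALSE_RATE   # 2,250 false reports per year
odds = MALE_POP / false_reports        # about 1 in 71,111 per year
print(f"about 1 in {odds:,.0f} per year")
```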
The demography of actual incidences of rape is a lot more complex, but the “1 in 5 women experience sexual assault in their lifetimes” figure comes from a CDC study. Based on a brief study of these results, I’d guess the majority of these cases are “intimate partner” assaults.
In 2015, there were about 90,000 rapes reported to law enforcement. Of those, fewer than two thousand resulted in a conviction. If you take the number of convictions, .0006% of the population is raped every year. If you presume every person who reported it was telling the truth and the prosecution just mucked it up (a very, very generous assumption), then the rate is still less than .03%. (And false rape accusations are 0%.) If you assume only one out of three crimes is reported, you get a rate of about .08% (or 270,000 rapes per year).
Compounding the highest of those numbers over a roughly 78-year lifespan, and presuming each rape is to a unique person (which will make the number larger), about 21 million people will be raped in a given (average) woman’s lifetime. Against a population of ~330 million, that is a rate of 6.4%, or roughly one in sixteen. In contrast, if you believe that the justice system gets to determine whether a rape occurred, about 156,000 rapes will occur over a given woman’s lifetime, a rate of roughly .05%, or about one in two thousand. Note, this number includes men (and if you wish to expand beyond rape to all sexual assault, you get a larger proportion of male victims).
So the correct number is not ‘1 in 5 women will be raped’ but somewhere between ‘1 in 16 people’ and ‘1 in 2,000 people’. The way you get to ‘1 in 5’ is either by presuming less than one out of three crimes is reported or by counting all sex crimes as assault or both. (And then doing a bit of sophistry: men are victims of sex crime at pretty high rates but women are more likely to be raped. Yet somehow the number always becomes ‘1 in 5 women raped’ instead of ‘1 in 5 people are victims of sex crimes’. I’ve seen studies that count things like men getting groped as violence against women…)
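Here is the lifetime-rate arithmetic sketched in code, for anyone who wants to check it. The population size, lifespan, and report/conviction counts are the assumptions used in this thread (and the “unique victim per incident” simplification deliberately inflates the result), not authoritative figures:

```python
# Lifetime rates under the thread's assumptions: ~330M population,
# ~78-year lifespan, 90,000 reports and ~2,000 convictions per year,
# naively treating every incident as hitting a unique person.
POP = 330_000_000
LIFESPAN = 78
REPORTS = 90_000
CONVICTIONS = 2_000

def lifetime_rate(per_year: int) -> float:
    """Fraction of the population affected over one lifespan."""
    return per_year * LIFESPAN / POP

print(f"convictions only: {lifetime_rate(CONVICTIONS):.2%}")   # ~0.05%, about 1 in 2,100
print(f"1-in-3 reported:  {lifetime_rate(3 * REPORTS):.2%}")   # ~6.4%, about 1 in 16
```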
You are correct that the best evidence is that false rape statistics are somewhere in the 2-10% range. That doesn’t include cases the police dismiss as non-credible on their face, or accusations made to things like HR boards. Still, the rate is relatively low.
The question of which is more common is basically what you believe. If you believe that only convicted rapists are rapists then being falsely accused is as or more common than rape. (In fact, at the high end, it might be as much as five times more common.) If you believe the majority of rapes go unreported or unconvicted, then even at the high end (let’s say 20%: the high end of 10% and doubled for the ones that got dismissed etc) false rape accusations are significantly less likely than rape (though any individual accusation has about a one in five chance of being badly motivated in that scenario).
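The “five times more common” figure works out like this; the 10% high-end false-report rate and the ~2,000 convictions per year are assumptions carried over from the surrounding comments:

```python
# High-end comparison: false accusations per year vs. convictions.
REPORTS = 90_000
CONVICTIONS = 2_000

high_end_false = 0.10 * REPORTS          # 9,000 accusations at a 10% rate
ratio = high_end_false / CONVICTIONS     # 4.5, i.e. "as much as five times"
print(ratio)
```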
This is why it effectively serves as a wedge issue. Do you believe the criminal justice system works or doesn’t? If you do, it follows the low end of those numbers makes sense and false rape becomes more prevalent than rape. If you don’t, then false rape accusations might not even exist or are at least rare.
presuming less than one out of three crimes is reported

This doesn’t actually seem unreasonable to me, especially if the majority of sexual assaults take place within pre-existing relationships. Most of those aren’t going to be reported.
The one out of three number comes from an actual estimate but there are lower or higher ones. I wouldn’t say the liberal position that rape is extremely highly underreported is unbelievable on its face. It is unfortunately pretty unfalsifiable (women don’t report rapes and lie on our surveys but they’re there!) so it leads to all sorts of wildly bad statistics.
I’m not using my estimates at all. I’m not a criminologist but a statistician.
The one in three number comes from the Bureau of Justice. You also have RAINN (who have an incentive to portray rape as common) who estimate it as 38.4% are reported. Both agree rape is relatively underreported: the Bureau of Justice estimates a little under half of non-sexual assaults are reported while RAINN says about two thirds are.
@Erusian
I thought your argument was that the one in three figure was unrealistically low. But now you seem to be saying it’s reasonably accurate. Or are you assuming that the RAINN figure is wrong?
I’m not assuming anything. My simple point is once you work the math then your assumptions lead to two different conclusions, which means it serves as a wedge issue.
If you’re referring to me calling the CDC study seriously flawed, you missed that the one in three number still doesn’t get you to the one in five statistics often quoted. You need to presume significantly fewer are reported. I’ve found no credible studies that make a strong case for the number they’d need to reach it, either from the Justice Department or RAINN, which is fairly damning of that particular statistic.
For a male population of ~160 million, that means your chances of being the subject of a false report of rape in any given year are about 1 in 71,111.
That only considers cases reported to the police, as Scott pointed out here. And do you not see the obvious problem with comparing the lifetime probability of one thing (being raped) to the per-year probability of another? Or with including everyone with a Y chromosome in the same pool of “individuals who could be falsely accused of rape”?
Based on an estimate from the National Safety Council, you’re about as likely to die in a fire in any given year.
As I pointed out, your numbers are wrong; but even taking them at face value, people take precautions to avoid dying by fire.
Note that Marie’s rape was included in the “false reports of rape” statistics for the year in question, and that for that year in question Lynnwood had about 4* times the percent of “false reports of rape” than the country did as a whole – possible evidence that their police department was underly scrupulous in determining the facts about rape reports.
Edit: * – “In the five years from 2008 to 2012, the department determined that 10 of 47 rapes reported to Lynnwood police were unfounded — 21.3 percent. That’s five times the national average of 4.3 percent for agencies covering similar-sized populations during that same period.”
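The quoted Lynnwood percentages do check out, for what it’s worth (numbers as given in the article excerpt):

```python
# Verifying the quoted figures: 10 of 47 reports ruled unfounded,
# against a 4.3% national average for similar-sized agencies.
unfounded, total = 10, 47
rate = unfounded / total
print(f"{rate:.1%}")              # 21.3%
print(round(rate / 0.043, 1))     # ~5x the national average
```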
Yes, and sometimes the convictions are false too. That’s noise in the categorization system; if the signal-to-noise ratio is small, we’ve got big problems.
Sometimes those cases “proven to be false” aren’t false:
And sometimes cases “proven to be true” aren’t true, what’s your point?
possible evidence that their police department was underly scrupulous in determining the facts about rape reports.
Edit: * – “In the five years from 2008 to 2012, the department determined that 10 of 47 rapes reported to Lynnwood police were unfounded — 21.3 percent. That’s five times the national average of 4.3 percent for agencies covering similar-sized populations during that same period.”
This is why we have tests for statistical significance.
No, that’s prevalence of accusations proven to be false “if there is a clear and credible admission [of falsehood] from the complainant, or strong evidential grounds.” You could just as easily say that “false accusations” are the majority of rapes if you consider every accused rapist not convicted of rape to be innocent. They also exclude cases of mistaken identity if the rape occurred but the wrong man was accused.
That 2-10% figure comes from a range of different studies with different criteria for “false reporting”. The most stringent requirements actually return false reporting levels well below 2%; the least stringent give results well over 10%. I feel that the lower end of the range gives the most reasonable answer, as not every case of false reporting involves an accusation against an actual person.
That only considers cases reported to the police, as Scott pointed out here. And do you not see the obvious problem with comparing the lifetime probability of one thing (being raped) to the per-year probability of another? Or with including everyone with a Y chromosome in the same pool of “individuals who could be falsely accused of rape”?
I actually did use the per-year probability of dying in a fire (1 in 118,051). Please read my sources before accusing me of misusing them.
As I pointed out, your numbers are wrong; but even taking them at face value, people take precautions to avoid dying by fire.
And even with those precautions, they still have as great a probability of dying in a fire as being falsely accused of rape without any precautions.
That 2-10% figure comes from a range of different studies with different criteria for “false reporting”.
From the methodology section of your own link:
The determination that a report of sexual assault is false can be made only if the evidence establishes that no crime was committed or attempted. This determination can be made only after a thorough investigation. This should not be confused with an investigation that fails to prove a sexual assault occurred. In that case the investigation would be labeled unsubstantiated. The determination that a report is false must be supported by evidence that the assault did not happen. (IACP, 2005b, pp. 12-13; italics in original)
This is the _proven_ false allegation rate. The meta-analysis notes studies reporting numbers “from 1.5% to 90%”, and references a study with a 64% “unfounded” rate. They concluded that study showed a 10.3% “false” report rate, after excluding “cases in which the police decided that the victim was an unsuitable witness, in which the police could not or did not produce corroborating evidence, in which the victim stopped cooperating with the investigation, or in which the investigating officers seemed to be prejudiced against the victim.”
So yes, they used studies with different criteria. But they didn’t use those studies’ conclusions; rather, they re-analyzed the data according to their own criteria (if they could).
The “2-10%” number is rape accusations demonstrated to be false. Most accusations result neither in a conviction nor a demonstration that the accusation was false. And there’s also not-insignificant categories like “the specifics of the accusation did not constitute a crime”.
The prevalence of false reporting of rape is approximately 2-10%.
My daughter was asked to homecoming by a boy from a neighboring school that she knows only through inter-school activities. After accepting, she was warned off by no fewer than 3 friends, because the boy has a reputation for sexually assaulting / roofie-ing girls. So she decided not to attend homecoming with this boy, and told him so. Then she had two more friends come forward and affirm her decision – it was a ‘good call’, she was told. No one, however, told her who he had assaulted – only vague ‘heard it from a friend who heard it from a friend’ rumors. I don’t know whether this boy deserves the cloud that follows him or not. Perhaps he’s being bullied by another party or parties who have decided to make him a social outcast.
In any case, it seems likely that Purplehermann is worried (and maybe should be worried) about this sort of accusation as well. It’s much less risky for a false accuser than going to authorities, and perhaps more common than the type of accusations covered by the study you link to.
While I am sure these things happen, people around me seem convinced that these are far, far more common than anything that seems realistic to me. I meet people from a lot of different social groups, and most people I know I’d have a hard time believing would do any of these things.
“He seemed so normal.”
The big question for me is: why can’t being a decent person be enough to let you trust others anymore?
Being a decent person means that other people can trust you, not vice versa.
So now I’m wondering if I’m just oblivious or there has been something pushing people to trust each other less recently.
I mean, maybe. Certainly the story on a lot of police abuse, or other abuse-type things, is that what’s changed isn’t the frequency (if anything, it may even have gotten less common) but the ability to get proof. So the folks who otherwise would have said “nah, lying criminals/sluts/children” instead end up seeing themselves as betrayed by neighbors/authorities/friends, which does undercut trust.
If you were wrong about John, your coworker, who really was raping his daughter (all generic you’s, obviously), as the DNA evidence proved, then can you really trust your judgment about Jane, your neighbor?
“He seemed so normal.” I don’t think I’m falling for this failure mode. I don’t know anyone who is a (proven or even accused) rapist, and I have difficulty believing there are large quantities of women in my country doing something as horrible as falsely accusing men of rape. I only know one girl (maybe two) who has made a false accusation socially, and this particular person did not surprise me, but filing falsely would have.
Supposedly people can’t tell when others are depressed or even suicidal, but I have a good track record for noticing when something is wrong. This makes me skeptical that I would totally miss the signs if rape or false accusations were as prevalent as is apparently believed.
I’m sorry, you know no one who has been accused or convicted of rape and one, maybe two people who have made (you somehow know) false rape allegations, but you’re still taking seriously whoever is telling you to never be alone with a woman out of fear of a false accusation? Have you considered that this person may just be paranoid? Alternatively, given that you weren’t surprised, maybe just don’t be alone with people who you think would make such accusations, as you’re apparently very good at picking them up.
You may be better at noticing such things than I am, but alternatively, people may just not want to discuss such matters with you, especially if you’re friends with the person they would accuse, as is usually the case in social circles.
Or, maybe your social circles really aren’t infested with rapists, or false accusers. Not everywhere has to be, for it to be a major problem at a national level.
This distrust of people has been showing up recently from multiple people who don’t know each other. I am worried that the social fabric (for want of a better term) is deteriorating for some reason, and more about lots of people being horrible than about my own well-being; the warning was brought up to show that some people are really worried about this and think it’s common enough to be a serious risk.
Eh, for decreasing social trust, I have no useful insights, especially for a country I’ve never been to. I think there has been a general increase in cynicism through web culture more generally, which I think has negative impacts, but again, I’m only familiar with the areas I interact in.
This distrust of people has been showing up recently from multiple people who don’t know each other.
But do they all know someone else who has been stirring stuff up? Directly or indirectly. Stirring up drama is pretty easy, and there’s no shortage of people who do it. For instance, consider this recent case in the US. One girl posts a notice in a girl’s bathroom that there’s a rapist in the school “AND YOU KNOW WHO IT IS”. Other girls post similar notes based on the first note. It’s assumed by various other students that one particular male student is the rapist. But it’s all meaningless drama; the original poster denies she meant the male student targeted, and the copycats had no idea.
“He seemed so normal.” I don’t think I’m falling for this failure mode. I don’t know anyone who is a (proven or even accused) rapist, and I have difficulty believing there are large quantities of women in my country doing something as horrible as falsely accusing men of rape. I only know one girl (maybe two) who has made a false accusation socially, and this particular person did not surprise me, but filing falsely would have.
Most rapes are committed by the ~2% of the male population who are sociopathic. I suppose it’s the same for false rape accusations, except that most of the perpetrators are women.
“I have been warned never to be alone with a girl, because apparently false rape accusations are rampant in my country.”
I don’t think the modal fear with this advice is false accusations of rape. Usually it’s “sexual misconduct.” (The scare quotes are because of the phrase’s vagueness.) I think most of us have witnessed co-workers who do not get along with one another. This problem has always existed and isn’t going to be magically fixed anytime soon. “Just be nice to people” is always good advice, but you have to consider that sometimes people are not going to be nice back, and if certain variables align in a certain way you could be at a substantial disadvantage with no one, no lobby, no organization to have your back.
Who is doing all this assaulting?! Are a tiny portion of men assaulting a lot of women?
Most likely.
Crime in general is not uniformly distributed in population. Some people are way more violent than others. Some people are way more mentally disturbed than others. Some people have low self-control. Some people are psychopaths. Some people are in positions where it is easy for them to abuse others, either because they have some kind of protection, or because they have an access to many vulnerable potential victims.
I can’t speak with any confidence about the fraction of rapists in population, or about the fraction of rape victims, but it wouldn’t surprise me at all to learn that e.g. 1% or 2% or 5% of men would rape 10% or 20% or 50% of women.
(Ignoring all other forms of rape, to keep this debate simple.)
This topic is politically sensitive, because… let’s say that explaining things by “there are differences between people, so they act differently, duh” is frowned upon these days. And of course, the situation is more complicated: the boundary between e.g. violent and peaceful men is not sharp; people behave differently on different days because their situation or mood has changed; sometimes unusual situations happen; etc. So there are also rapes where the man is an otherwise decent guy, who did something “out of character” because his emotions got momentarily stronger than his self-control. And the whole spectrum in between.
So I could imagine the right answer to be something like “15% women raped by 1% of men (serial rapists), and 5% women raped by 5% of men (a date gone wrong)”. There are also repeated victims among women, etc.
Rape is very much a case of a small number of perpetrators and a large number of victims. DNA work on backlogs of rape kits turned up horrifying numbers of repeat hits, as did interviews of convicted rapists and anonymous surveys of large numbers of men; it is all the same story. North of 80 percent of all sexual assaults are the work of serial offenders who keep on victimizing people until they get too old to continue doing so or they finally wind up behind bars. The remainder are mostly cases of “everyone involved too drunk to consent to anything”, and a small smattering of one-time offenders who found the actual experience did not match the fantasy and consequently stopped. One-time offenders are, to a first approximation, never convicted.
Going off cases of people who actually spent time behind bars for rapes they did not commit, there is one and only really one way for this to happen to you.
Step one: Look suspicious. By which I mostly mean “minority”.
Step two: Be in the vicinity of a particularly gruesome rape.
Step three: Have a shitty local police department and a bad public defender.
Note that there are multiple definitions of sexual assault, ranging from rape to sexual behavior without consent to sexual behavior with consent that the person later regretted.
There are also multiple definitions of false accusations, ranging from malicious accusations intended to harm the accused to defensive lies to protect the accuser from blame* to non-true accusations.
* For example, a cheater can accuse the person they had sex with, to save their own relationship.
This is an appeal for help. I’m a huge fan of SSC and this community, so I figured this was the place to come for what I need.
I’m about to start interviewing candidates for two open headcount in a team, that I’ve recently become manager of. Most of the candidates have some relevant experience etc. but I’d like to recruit based on general intelligence.
I have some constraints:
I work in a mid-level role in a large company which means that I can’t start unilaterally handing out written tests, I have to rely on a 30-45 minute interview.
Any candidates I favor have to be interviewed by my superiors, and at a minimum any “weirdness” in the interview with me is likely to be fed back to my superiors then, so I’d like to avoid that.
Several candidates are coming from recruitment agents, and will likely be encouraged, post-interview, to share the questions I asked, so that they can be provided to other candidates, represented by the same agent. So I need to ask different questions to different candidates.
And finally, I am a native English speaker and will conduct the interview in English, but none of the candidates are native English speakers, so I don’t want to accidentally test English proficiency instead of intelligence.
Given these constraints, what’s a good selection of different questions I can ask in an interview to determine who is unusually intelligent?
I probably won’t have much useful to say, or the time to formulate it if I did, but it might help others if you answer the following questions:
– Can you elaborate on what “general intelligence” means here? Do you have anything more specific that you wish to select for?
– You say you can’t administer written exams, but are you able to ask technical questions that involve the candidate working through analysis, code, or whatever is required in your particular domain?
As an aside: I personally wouldn’t discount the value of testing a candidate’s proficiency in English. In my limited experience, even very technical jobs in North America benefit from someone with strong language skills. That said, please excuse any typos above.
Generally speaking, you should be asking these people intellectually demanding questions within the domain they will be working in. If they are not expected to have any specific skills, I would try asking them to copy-edit mangled text or do math problems. Both of those should correlate well with general intelligence.
I’d be wary of copy-editing text; for non-native speakers, that could be at least as much a test of English proficiency. Some very intelligent Indian coworkers of mine have to make frequent use of the spellchecker and grammar checker.
Math problems or (for programmers) programming problems are much better.
Twice exceptional people can be truly exceptional in their areas of exceptionality.
Do these job openings need a true generalist, or, like nearly every job today, are they specialized?
Show me your evidence that general intelligence is preferable to higher specific-skill intelligence.
And also show me your evidence that a person with higher general intelligence, but a specific deficit in an intelligence important for a particular job is better than a person with a lower general intelligence, but specific strengths that align with the strengths necessary for the job.
I’d start with Joel Spolsky’s “Smart and Gets Things Done” essay; it gives an approach, not specific questions, but I find it really helpful.
Second, I tend to operationalize “smart” as: knowledgeable about the relevant domain, and makes the less-obvious connections. For example, I work with financial professionals. Basic competence: knows the products and systems they’ve worked with directly, and the basic toolset for the job. What impresses me as particularly smart: knowing more about their products than they need to do their job (where are the key risks, how do we attempt to manage them, why are they important parts of our product suite), knowing the history of the product designs and what drove changes over time, and seeing the connection between their specific function and the company overall. How did they learn about these things: did they look up industry research?
This approach makes the questions easy to customize to the candidate while remaining predictable and consistent, since I’m probing them on the roles they’ve had specifically.
What you’re asking here is pretty difficult. Rather than try and measure general intelligence directly you should measure proxies that are relevant to the job, even if they incorporate “works hard” and “studies hard” into your measure. Anyway here are some suggestions:
Ask what books they’ve read recently, pick one and ask for a summary and what they thought of the book. Same with TV shows or movies if that is more relevant.
What are the pros and cons of using X method vs. Y method (both methods that they should be familiar with)?
Are there good published papers showing useful techniques for determining who will be a better worker? My impression is that in general, interviews don’t do very well at selecting better candidates.
Technical questions in your field are worthwhile; making sure they know their stuff technically is something you can do. Trying to figure out how well they’ll fit with the office culture is also important. But I don’t know anything more specific to recommend.
There is research on the topic. The short answer is that no method of potential-employee evaluation is very good, but some methods are better than others, and this gels well with my personal experience.
What I do is go over the position I want filled and think about the qualities I would want the people in it to have, the knowledge they need to do the job, and the day-to-day process of work. Then I write up a list of questions about those things. I mean an actual list; I keep score, and if they get a question right I just ask a harder one until I find the depth of their knowledge. The list is important: it helps make sure you don’t forget to ask questions you want to go over, it helps make sure you’re evaluating the candidates equally, and it helps with keeping notes. When I’m done I assess the various segments of the test on a fail / pass / high pass basis, then add in my assessment of the soft factors to come up with an answer. Usually someone emerges pretty clearly on top, but not always.
It’s not a perfect method; some things are hard to measure for, but I’ve been using it for several years and gotten consistently excellent analysts out of it.
In traveling overseas, what are the biggest differences you’ve noticed with your home country? These are the main things I’ve noticed, as an American that has traveled an above average amount.
Europe:
I am always surprised by how much the local cuisine dominates the restaurant scene in European cities. In the US (especially in the West Coast, where I live), for every “American” restaurant, there are several “ethnic” restaurants, but in Germany German restaurants seem to outnumber any other kind, in France French restaurants dominate, in Italy Italian restaurants, and in Spain, Spanish. Perhaps there is some selection bias, but I tend to avoid tourist areas, so I don’t think that’s it.
I’m always amazed how good the highway drivers in Europe are, and how good the highways are. German highways tend to always have an extra lane over their American counterparts. It’s a very enjoyable experience.
That said, European city streets pale in comparison to American streets, and city drivers in Europe are awful in comparison to US drivers. I suspect this difference is due to practice, since Americans drive far more, but that doesn’t explain why European highway drivers are better.
South America:
Obviously the differences are more pronounced here, but one thing I wasn’t expecting was how expensive some goods can be. Food is very cheap, but things I take for granted, like a pair of Levi’s, are cost prohibitive. In general though, I’m always amazed how cheap “stuff” is in the US.
Asia:
There’s a feeling of optimism that I don’t see elsewhere. The population knows that things are getting better, and that just really comes through. Also, it’s far more crowded, though I was expecting that.
The first thing I noticed (this was in Greece) is the opposite of your general Europe conclusion about highway drivers: I found them abysmal. They were slow, had no lane discipline, and the long-haul truck drivers are an embarrassment compared to ours; they seem completely untrained.
The second thing I noticed was that Greek restaurants in America are better than those in Greece, in general (there were some outstanding ones, but they were the exception).
Third, or maybe 2B: Athens is a shithole. For one, there are people openly doing drugs and pooping/pissing on the streets everywhere; for another, you feel you are going to be murdered by a motorcyclist anytime you cross the street; and on top of that, the cabbies are rude. The only nice people I met in Athens were Canadians.
Fourth, outside Athens there are just a ton of abandoned buildings. Also visible graffiti everywhere in towns that in a similar American town would have none. These are places that are very pretty, outside of the buildings.
Fifth, and last thing that is interesting, the water is amazing. I’ve been to all sorts of American ocean areas, but none are as appealing to look at and swim in.
Overall I would not recommend Greece as a place to vacation. Were I to go back, it would be because someone is paying me to go.
I haven’t driven in Athens, but I tend to agree with your assessment of it otherwise. It was a fairly disappointing place for me, since I had always wanted to go, but it’s in a sad state right now. They did have better coffee than any other European country I’ve been to though.
1) Yeah, when people say that the highways in Europe are good they definitely aren’t talking about Greece. Our roads are mostly pretty bad, with a few shining exceptions, like Attiki Odos in Athens. Also, there is a joke that the only zebra crossing that drivers respect in Athens is the one right outside the airport: This lulls incoming tourists into a false sense of security and they are subsequently run over by a car when casually walking on any other zebra crossing in the city 🙂
2) About the restaurants I guess it comes down to taste. In general Greek restaurants outside Greece seem like a poor imitation of Greek food to me and I avoid them.
3) Well, Athens is not very clean and you are absolutely right about the graffiti. Cab drivers are not as bad as they were say 10 or 15 years ago. As in most big cities there are certain neighborhoods that locals know to avoid but a tourist may walk blindly into. In general, the best plan is to stay in Athens for a couple of days, see the Acropolis and a couple other sights and then just go to a pretty island somewhere and enjoy your vacation.
People don’t litter in Japan. I went to a Ramen Festival in Chiba with maybe 20-30 stalls selling food, at least 5000 people there. Instead of garbage cans all over the place, there was only one central place to throw away/recycle the disposable bowls etc. And there was not one piece of litter on the ground: not even a napkin.
Also in Japan, I visited an elementary school for their annual physical fitness day and was pretty shocked at how run down it was, with rusted playground equipment and old buildings that looked worse than the school I went to in rural California 30 years ago. It looked like a dystopian video game setting.
European highway drivers being better has a lot to do with the passing lane being treated as such: you are in it only when passing, and there is a very strong pressure against camping in it. I think city drivers being worse is inextricably linked to the fact that most inner cities in Europe have tiny, non-rectilinear roads – the problem is just much harder than 95% of American cities. (There are other contributing factors to both highway and city differences, obviously, but I think those are two important ones.)
The attitude to pleasure I think is a big difference – in British-descended cultures (so America, Canada, and I believe also Australia) there is a strong tendency to view pleasure in this weird suspicious way, like you’re somehow doing something wrong by doing things purely to make yourself happy in a short-term sense. I believe this results both in shitty food (compare all those cuisines to virtually any other in the entire world), and binge-drinking in a way that is very different from being drunk in other countries. And in pornography, for that matter, where most American porn focusses entirely on the man coming from penetrative sex or blowjobs, which is remarkably different from, say, Japanese porn where even the most awful molester videos or whatever will likely have an extensive oral and manual component. The idea of men having sex without spending a lot of time on their partner’s pleasure just doesn’t seem to be part of the equation.
I think mainland Chinese culture has a very strong tendency to not give a shit about other people who don’t directly benefit you, and a very powerful desire not to help others in need (cf. innumerable horrible videos of what happens after pedestrians get hit by cars). This is, from my understanding, a fairly understandable result of living under the kind of government they have.
I agree with a lot of what you said, but a few less-enthusiastic comments:
…but in Germany German restaurants seem to outnumber any other kind…
Might be true, but I had a hard time finding non-Turkish restaurants both times I visited Germany. I enjoyed that cuisine, but I found it a bit frustrating. In Austria, my wife planned ahead (as she is apt to do) using local recommendations online, and it worked out well.
The other thing that I didn’t enjoy about Europe was the large number of people smoking cigarettes. While I observed much less obesity compared to the US, I often wondered about the future impact of respiratory illnesses over there.
There’s a feeling of optimism that I don’t see elsewhere. The population knows that things are getting better, and that just really comes through.
I hear from older Chinese mainlanders that things indeed have improved greatly. Parents are able to fully feed themselves *and* their child (or two children, as the case may be now).
That said, it still feels like a rat race over there, where distrust of others is the norm, and there is a weird pride in bending or even breaking rules. This is probably a common example, but I’ve seen ambulances, with lights flashing and sirens wailing, have a tough time moving through traffic. I’m told this is largely because others assume the ambulance driver is simply trying to get ahead of them, and few cars will attempt to pull aside. Whether this has implications for the prosperity of Chinese society is something that interests me, but would probably result in more vagueness (at least from me).
[China] still feels like a rat race […], where distrust of others is the norm, and there is a weird pride in bending or even breaking rules.
Yup. It’s an ugly aspect of the society, created, I think, by 30 years of a government that encouraged psychopathic behaviour as the norm, followed by 40 years of just arbitrary corruption.
Interesting. I live in Europe, and that’s not my impression. I recently had vacations in Germany, in the Netherlands, and a short one in London. I like kebap meat, and had no trouble finding Turkish restaurants that sell kebap (and falafel) in any city. (Admittedly lamb kebap was harder to find than here.) At home too, we have Turkish, Chinese, Thai, Italian restaurants and more, as well as the ones selling local Hungarian cuisine. I haven’t been to America though, so I can’t compare.
I live in Germany and I can’t say I’ve noticed an abundance of German restaurants. But perhaps there are a lot of local cuisine places compared to the USA. I think they are certainly outnumbered by Turkish ones.
Compared to NZ, where I come from, there are far fewer Indian and Chinese restaurants, which of course makes sense when comparing the immigrant demographics.
I hate how cash-dominant Germany is. I always have to carry cash around with me, and it’s doubly annoying because they’re only just starting to make it possible to get cash out at retail stores and supermarkets.
Mobile reception is terrible in Germany, especially when compared to places like Cambodia or Thailand.
Tax in Germany is really high compared to NZ. I get to keep about 55% of my salary here, and then pay another 20% sales tax.
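A quick sanity check on the figures above: keeping 55% of gross pay and then paying 20% sales tax on everything bought compounds into a noticeably lower effective share. This is back-of-the-envelope arithmetic under my own assumptions (all take-home pay is spent on taxed goods, and the 20% is added on top of pre-tax prices), not a claim about the commenter’s actual tax situation:

```python
# Compound the two taxes quoted above into one effective rate.
# Assumptions (mine): all take-home pay is spent on goods, and the
# 20% sales tax is added on top of pre-tax prices.
take_home = 0.55   # fraction of gross salary kept after income tax
sales_tax = 0.20   # sales tax rate added to prices
real_share = take_home / (1 + sales_tax)
print(f"Pre-tax goods purchasable: {real_share:.1%} of gross salary")  # → 45.8%
```

So under these assumptions the effective purchasing power is under half of gross salary, which is why the two rates feel worse together than either does alone.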
My Mind is my home, and they would like shared ownership. But this is no democracy, it is a dictatorship. And I cannot risk becoming enslaved. Enslaved by their will. I am the only one. Fit to be king.
The Nine Worthies were, in the medieval imagination, the greatest warriors of all time. Three were from the Anno Domini era: conventionally King Arthur, Charlemagne, and Godfrey of Bouillon. The six more ancient were divided into Pagan and Jewish triads: Hector, Alexander the Great, Julius Caesar, Joshua, David, and Judah Maccabee.
But should not warriors of the imagination be accompanied by wizards? Who would the Nine Wizards be?
Elijah should definitely be the third – called down fire from heaven in his contest with the prophets of Baal, and will be the one to herald the coming of the Messiah.
(Although we might be straying into “cleric” rather than “wizard” territory with prophets. Depends how strict your criteria are.)
(Although we might be straying into “cleric” rather than “wizard” territory with prophets. Depends how strict your criteria are.)
Yep, that’s the thing.
Solomon definitely counts as a wizard, because the priests/Kohenim were a caste that royalty didn’t belong to. (Islam muddles the “cleric?” question by counting Solomon as a prophet…)
Well by that criterion, does David count as a warrior, or a paladin (presuming he actually did anything other than take the glory from Elhanan)?
Even if David began as a paladin (is slaughtering Philistines for their foreskins really Good?), I’m pretty sure the thing with Uriah would have caused him to fall.
@metacelsus , indeed, David did seem to lose his powers after that. At least, he didn’t go out to war again, and his reign got a lot more troubled with plague and rebellions.
Chuck at SF Debris has a Lazarus of the Week award for episodes that feature characters being brought back from the dead. Tom Paris, hilariously, was Jesus of the Week in the episode Threshold, where accelerated evolution made his body kill him and bring him back to life.
No two of the Nine Worthies ever worked together; if the wizards are supposed to be a similar list, you would probably have to pick just one of the Three Magi to include.
Are we allowed to bring this into the 20th Century? Because I’ll nominate the Wizard of Oz and the Wizard of Menlo Park. Also that Oppenheimer guy, on account of successfully casting “Dispel City”. Twice.
The capital of Oz was nice – due entirely to the magical conditions of Oz; it was nice before he got there. The west was entirely in the thrall of a wicked witch using mind controlled monkeys, the east was similarly terrified of a witch. The south was nice, but that was because of Glinda, and the north was okay because of the unnamed witch but a wicked witch lived there and presumably made life unpleasant for her neighbors while she kept the rightful heir to the throne in slavery.
For additional Jewish wizards, I’ll throw in Israel ben Eliezer, the Baal Shem Tov (master of the good name), and Judah Loew ben Bezalel, creator of the Golem of Prague.
Christian triad: I’d go for Albertus Magnus, Paracelsus, and Georg Cantor. Depending on how strict you are about “Christian”, I might substitute Isaac Newton for Georg Cantor.
They weren’t the greatest warriors, they were the most chivalrous (otherwise the list wouldn’t include Hector and exclude Achilles). Though by that standard I question Caesar and Alexander. In any event, what criteria are to be used for evaluating the worthiness of wizards?
While I do like the idea of tracking students by ability, and having mixed age groups, there is the issue of bullies. In my experience, the kids who repeated a school year were almost always bullies, because they repeated the year due to lack of conscientiousness and caring about punishment (unless they were immigrants; immigrants were frequently made to repeat a year for not knowing the language well, which has nothing to do with conscientiousness or rule following).
How do you deal with a 12 year old who has the math and reading skills of a six year old? The kid won’t like being stuck with kids so young; there will be resentment and anger, and the kid is much, much stronger. There will be a lot of temptation to abuse the younger kids, and most kids who are that bad at studying are less rule-following.
How do you deal with a 12 year old who has the math and reading skills of a six year old?
In here we stick him with the other kids his age who can’t or won’t put in any effort and keep him away from the kids that do care. We do this less than we used to, because God forbid we spend money on anything but education for ~~rich white people’s kids~~ our best and brightest, but it’s a solution.
It doesn’t really cost more at all. Instead of 10 high schools you get into because your parents made a fuss, you have 5 for whomever, 3 mid-high ones, and 2 for the nerds.
Discipline, followed by expulsion if the child does not respond to discipline. Hopefully this is the same way you deal with any student who keeps bullying their fellow students.
Indeed, bullying should be solved by solving bullying. Not by banning all kinds of things that are kinda associated with bullying, only to find out that at the end we still have lot of bullying, but we don’t have many other things (such as the opportunity to study at your own speed).
The problem with expulsion is political. Not only in the “culture war” sense, but it in the general sense of “it makes some people really angry, and most people avoid making personal enemies”. Specifically, it makes angry the parents of the bullies, and also the parents who imagine their child could be the next expelled bully.
Bullies are a child analogy of violent criminals. Just like we don’t handle crime by waiting for a Superman to eliminate the criminal, we can’t handle bullying by waiting for a heroic teacher who intervenes (and if it turns out the child’s parents are e.g. lawyers and want to take revenge on the teacher, the school administration will likely throw the teacher under the bus to save their own asses). Expelling bullies should be “business as usual”, with clearly defined procedures and rules. We don’t have that. (Well, this is country-specific, so I don’t want to talk for the entire planet here.)
I suppose most people underestimate the seriousness of kid-on-kid violence. They will look at the bully and think “oh, it’s just a small child… the proper punishment would be scolding (because corporal punishment is inhuman and illegal) and maybe an extra homework or two”. Meanwhile the bully is every day stabbing the victim with a pencil, and one day the victim concludes that suicide is the only way to avoid that.
The usual argument is that if we start treating the young bully harshly, expel him from the school and mark him as a “black sheep”, we are creating a self-fulfilling prophecy. The poor young bully now has his career ruined, people will see him with distrust, of course he now has no other choice than to turn his back on the society. Now we have made a real criminal in the future, and we will have to deal with it. Therefore… we have to throw the victim(s) under the bus, to give this potential criminal many second chances to return to pro-social life. — I see a point in this argument, but my sympathies are still strongly on the side of the victims.
There’s a nice quote from early Soviet teacher Anton Makarenko.
Pedagogic theories claiming that one can’t expel a bully from the classroom, or a thief from the commune (you should rehabilitate them, not expel them), are the rambling of bourgeoisie individualism, used to the dramas and “passions” of an individual, not seeing how entire collectives perish because of that, as if those collectives were not made of individuals as well.
I’m not sure about “bourgeoisie individualism” but the rest rings true to me. A bully is a focal point of conflict and of authority’s decision while the rest of the class is an amorphous mass that does not inspire the same compassion.
I mean we could also do both. Expel the bully and try to rehabilitate them, by placing them in a special class/school that deals expressly with bullies, as opposed to just expelling them and telling the parents “well, you figure out by yourself what to do with your sociopathic kid now, good luck”. I know this is a marginal approach occasionally used in France, where very unruly kids can end up in special schools with classes of 5-6 pupils at most, with multiple educators per class.
Of course that requires investments and resources, which few people are willing to spend on bullies (let alone the kind of particularly violent and antisocial bullies who get sent to these classes).
I agree that this is what we would do in a perfect situation. In real life, the school will prefer the cheapest solution. Which currently means turning a blind eye; or giving the bully a stern talk and pretending that it solved the situation (from the school’s perspective it does: the next day the bully will punish the victim for talking, and the victim will stop complaining officially).
A situation where kicking out the bully would be the cheapest solution, would be an improvement over what we have now.
No, the problem with expelling bullies is that bullies are popular and their victims are not. Humans see Bob mistreat Charlie and instinctively think well of Bob and are disgusted by Charlie.
There’s a popular perception/meme that bullies are acting out due to their own problems and weaknesses, but this is because people rewrite every cruel thing done by someone powerful as brave and just. Spectators simply cannot see bullying.
The correct response to being bullied is to leave: exit, not voice. Go elsewhere. If you are attacked by another person, and that person isn’t immediately and strongly punished, you have learned an important fact: you are an acceptable target in this society and everyone here hates you. You cannot make them like you; you can leave, and find somewhere else to be.
This is often difficult, as you’re a child and don’t have explicit choices of where to go. (EY I think said once–in HPMOR I think?–that adults see the problem with bullying as that it sometimes requires adults to take notice of things children do.) But if a child was being bullied, and asked me for advice, I’d tell him to fight back in the most violent way he can possibly imagine. Punch nuts, gouge eyes, fishhook, whatever it takes. He’ll lose, since bullies target the weak, but if he’s lucky, he’ll do enough damage to be expelled, and end up getting to be elsewhere. If not, maybe bullies will see him as weird enough to be not worth the trouble.
Jesus Christ, I was mostly with you until the last paragraph. Gouging eyes as a response to bullying!? Bullies are pretty terrible, but in all but the most extreme cases don’t deserve to have their eyes gouged out! Justice aside, blinding a classmate doesn’t get you expelled to a wonderful new school with new friendly classmates, it gets you expelled to the school where students who gouge people’s eyes go.
My emotions agree with you 100%. My intellect is not so sure, especially that last paragraph. Also, in my experience, a lot of bullies fold if you fight back at all – they aren’t the strongest or fastest, just the meanest.
IIRC, the traditional solution to bullying is to enroll the victim(s) in martial arts classes, in the expectation that this will assist them in winning the next fight.
But that doesn’t work so well when the school has a zero tolerance policy against violence, regardless of who started it, and/or has too few staff to see who started it and/or has teachers/administrators who prefer the bully.
My emotions agree with you 100%. My intellect is not so sure, especially that last paragraph. Also, in my experience, a lot of bullies fold if you fight back at all – they aren’t the strongest or fastest, just the meanest.
If they are rational then they will avoid needlessly attacking even a weaker opponent that can hurt them, even if they could ultimately win the fight.
But that doesn’t work so well when the school has a zero tolerance policy against violence, regardless of who started it, and/or has too few staff to see who started it and/or has teachers/administrators who prefer the bully.
So what? They aren’t going to expel anyone anyway, are they?
Though I think you go a little too far, I mostly agree: as was apparently later discovered with Columbine, people have a mistaken impression that bullies (in this case, the most extreme kind) are usually outcasts and misfits; after all, bullying is bad and we are good, therefore “we” would not do that, especially not the children of respected community members Alice and Bob. In my experience not every popular kid is a bully by any stretch, but most, if not all bullies are themselves somewhat popular. The slightly odd kid sitting by himself all the time couldn’t successfully bully even if he wanted to because he himself is too vulnerable a target.
What’s more, in my experience, not only is the bully often a popular kid whose parents are also respected members of the community, but their academic performance is not necessarily the worst. It’s usually not the best, but it’s not at all necessarily lagging way far behind. The bully isn’t stupid, posing another difficulty with any attempt to expel or quarantine him among other “problem” cases.
And I agree that verbally or physically fighting back is the way to go, assuming parents or administrators won’t do anything, though hopefully one begins with verbal insults and shoves before escalating to eye gouges and groin shots in the face of presumably severe, physical bullying. I was told for years to “just ignore them,” which I did, and which only succeeds in sending the message “there are no risks or consequences for bullying this person.”
EY I think said once–in HPMOR I think?–that adults see the problem with bullying as that it sometimes requires adults to take notice of things children do.
That is a reasonable thing for a 10 year old to think, but if EY himself thinks it as an adult, it shows a massive lack of emotional intelligence.
Just generally creating an atmosphere of trust would improve things. For every student to be sure that when they have a problem, there is an adult person who will listen to them. Preferably the adult person (a school psychologist?) would organize regular 1:1 talks with everyone.
Problem is, things like this are hard to systemize. No matter what algorithm and list of checkboxes you propose, there is always a way to check all the boxes while making it obvious that the students had better keep their problems to themselves. So… I can imagine it being done successfully, but only when people really want to… which I assume most would not.
Not making fun of kids, or even publicly humiliating them, for snitching would be a start. In my school, it was quite common for teachers to kick down the kids who reported on others.
Stopping every small bullying behaviour you see. Not downplaying kids’ feelings; respecting them. Don’t force kids who have been in a fight to do group work together*. Let kids who don’t want to interact with other kids be free; don’t force social interactions; let the loner with the book read the book, don’t force them to play with others. Don’t force friendships upon people.
And frequently, teachers themselves are bullies; they designate a kid they hate, and make fun of them, giving the class permission to bully the kid.
I have had teachers who had the attitude EY describes; for them, they would rather not have to deal with it; they don’t care at all whether there is bullying going on, as long as there are no complaints (or suicides or other forms of trouble).
*That thing where you force kids to apologize to each other and shake each other’s hand? It’s abusive. You wouldn’t do it to an adult, would you?
Neither of these comments supports EY’s (or his character’s) assertion that “adults” as a class are oblivious and prefer to be oblivious to things children do.
Many parents are; most cases of bullying can be stopped by parents who establish high trust relationship with kids, by showing them they have their back.
And parents are the adults who care the most about the kids. The rest are worse.
I had one parent who was very supportive, and the other was completely unapproachable. I was lucky. Many kids have no parent who listens.
Many parents are; most cases of bullying can be stopped by parents who establish high trust relationship with kids, by showing them they have their back.
Eh, authority cleaves to authority. The parents of the victim typically back up the school, not the kids, because that’s what’s expected of ‘good people’. Kids are notoriously unreliable, you know. (The parents of the bully do otherwise)
When I was in 5th grade, there was a poor kid in my class who was bullied a bit. One day during recess somebody on top of a jungle gym spit on my hat, somebody who was not exactly a bully, at least not to me, but he was much larger and tougher than me. Rather than blame him for spitting on my hat I blamed the poor kid, and I hit him in the arm or something.
Then the poor kid’s dad came storming over from the parking lot. The guy looked like a Metallica roadie and he was obviously angry, and he really should not have been anywhere near the parking lot at that time of day, so he was clearly spying on us all. Other kids warned me to run away, that he was known for hitting kids, but I stood there frozen, feeling guilty. He yelled at me a bit but that was about it.
The poor kid’s dad got in trouble with the school of course, and not for the first time. The poor kid was very embarrassed and apologized to me about it the next day but I brushed off his apology, and I thought to myself that I wouldn’t be hitting the kid again and I was in the wrong anyway, so probably a good thing his dad did.
Depending on situation, fighting back may have the advantage that the bully can no longer pretend that they were “just playing”.
Speaking from personal experience. I was briefly bullied by a classmate who did martial arts competitions. He often asked other people to punch him, just to show how he can deflect the attack or ignore the pain. I had no realistic way to hurt him by fighting back; he probably would have enjoyed some more interaction.
However, he also tried to keep an image of a good student. So he framed his bullying as “just playing”. Once I started fighting back, this wasn’t possible anymore. He could have easily beaten me to a pulp… but then he could not have defended it as “just playing”. So my desperate bet paid off.
Escalating in violence is a way to deal with bullies.
There are other ways of escalating that help deal with bullies, but those require your parents to be on your side. Unquestionably, with the resources and time they have, to be on your side. It’s mind-boggling how rare that is, and it seems like it’s mostly bullies’ parents who manage it.
Parents who will back you when you escalate legally mean a lot. Going to the police over theft of your property; suing for assault; creating lots and lots of trouble for the school, the teachers, the bullies’ parents, etc.
I’ve heard of a case where a girl was bullied; her things would disappear and reappear, sometimes broken. She once walked out of the school and went to the police station and reported the theft of her things, and she wasn’t bothered since.
But that also requires guts, and somebody on your side. The issue is, kids can usually only resort to an escalation in violence; they usually don’t have the tools or skills for legal escalation (writing letters to the ministry of education; making complaints to the education inspector, etc.).
Tracking works for something like 95% of students. The ends of the Bell Curve are separated out into different schools. The districts here have special schools for specific cases like the above. Those at the very high end are half-integrated into the normal school system and half-integrated into local universities and their own special classes/teachers.
How do you deal with a 12 year old who has the math and reading skills of a six year old? The kid won’t like being stuck with kids so young; there will be resentment and anger, and the kid is much, much stronger. There will be a lot of temptation to abuse the younger kids, and most kids who are that bad at studying are less rule-following.
In general, outside of really poorly treated children, this isn’t particularly true. Little kids are generally not the target of much older bullies, it is slightly younger, or small for their age or odd in way X, Y or Z that gets kids bullied. Lots of bullying is done as status seeking behavior and it isn’t uncommon to find bullies who are protective of younger kids. Put a 10 year old in a class of 6 year olds and he is almost always the biggest, strongest and most capable in enough ways that the fact that he is lousy at math and spelling really doesn’t matter much.
There is a distinction here between bullies and kids who are unable to control their own emotions and who get in fights easily and lash out though, the latter is not who you want around small kids (or really anyone).
Using the General Social Survey, I discovered that 5.8% of male respondents born in the U.S. are virgins. But 7.2% of men born under the sign of Sagittarius are virgins, and 7.0% of Scorpios are virgins.
Have I discovered that there really is something to this astrology thing? I don’t think so.
Sagittarii are born between November 23rd and December 22nd. Many school systems have a grade cutoff of December 31st. Children born January 1st or after will be in a lower grade in those school systems. This means that Sagittarii are likely to be the youngest children in their grade, and thus the least mature physically, emotionally, and intellectually. This puts them at a disadvantage relative to their older classmates and more likely to grow up lacking the social skills necessary to lose their virginity. This is the same effect as the findings about adolescent height and labor market outcomes.
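The bucketing behind numbers like these is easy to sketch. This is not the commenter’s actual GSS analysis: the zodiac cutoffs follow the dates given above (Sagittarius running November 23rd to December 22nd), and the records are toy data standing in for survey responses:

```python
# Minimal sketch: bucket respondents by zodiac sign from birth date
# and compare virginity rates per sign. Toy data, not GSS data.
from collections import defaultdict

def zodiac(month, day):
    """Return the Western zodiac sign for a birth date (month, day)."""
    # (month, last day of the sign that ends in that month, sign)
    cutoffs = [
        (1, 19, "Capricorn"), (2, 18, "Aquarius"), (3, 20, "Pisces"),
        (4, 19, "Aries"), (5, 20, "Taurus"), (6, 20, "Gemini"),
        (7, 22, "Cancer"), (8, 22, "Leo"), (9, 22, "Virgo"),
        (10, 22, "Libra"), (11, 22, "Scorpio"), (12, 22, "Sagittarius"),
    ]
    for m, last_day, sign in cutoffs:
        if month == m:
            # Past the cutoff day, the next sign in the cycle has begun.
            return sign if day <= last_day else cutoffs[m % 12][2]

def rates_by_sign(records):
    """records: iterable of (birth_month, birth_day, is_virgin)."""
    counts = defaultdict(lambda: [0, 0])  # sign -> [virgins, total]
    for month, day, is_virgin in records:
        tally = counts[zodiac(month, day)]
        tally[0] += is_virgin
        tally[1] += 1
    return {sign: v / n for sign, (v, n) in counts.items()}

# Toy usage: two Sagittarii (one virgin) and two Cancers (none).
toy = [(12, 1, True), (11, 30, False), (7, 4, False), (7, 10, False)]
print(rates_by_sign(toy))  # → {'Sagittarius': 0.5, 'Cancer': 0.0}
```

With real survey data the interesting step is the one the comment describes: checking whether the signs with elevated rates are exactly the ones falling late relative to school-entry cutoffs.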
You can game the system by “redshirting” your kids. Should you do it? Naively, your kids gain an “extra year” of “free” stuff, though of course you will have to make that up later. What they lose is a year in the labor market they will never get back. Similarly, putting them ahead will give them an extra year in the labor market (though in some cases they would end up failing out and needing an extra year of education, most likely in college), but potentially at the cost of harming their social development. Society as a whole is harmed by redshirting and benefits from letting kids skip grades, so we ought to make it illegal to redshirt kids and illegal to hold kids back a grade, while encouraging bright kids to skip grades if they choose to do so.
It seems to me that if the goal of K-12 education is teaching material, the main problem with it and the main area where reforms could improve it is lack of incentives. I remember being in elementary school where they had this program where you read books, took a test based on the books, and could get a candy bar if you passed enough tests. (I don’t remember the exact details of the scheme.) As I remember it, it was a pretty effective motivator and couldn’t have cost much to run. But for the most part I didn’t see incentives to try hard until high school, when it did start to matter, though only for the college-bound students. For the non-college-bound, there wasn’t an incentive to do anything but the bare minimum necessary to pass. “We’re never gonna use this, it doesn’t matter,” they said. They weren’t wrong.
Experiments have been tried with money and generally aren’t effective because the amount is usually insufficient to be a good incentive. If you want students to spend many more hours studying, $50 at the end of the semester isn’t worth it. But you could set up an incentive system using the very thing you’re taking away from the students: their time. You could have tests of the subject matter every two weeks, give the kids who pass the next day off, and send the kids who don’t back into class to review the material. For younger kids whose parents want the free daycare aspect, you could send them to the playground or let them play around on the computer or read in the library, whatever they want to do. If you believe in the massive gains that can come from moving an economy to an incentive-based structure, I don’t see why you shouldn’t expect similar gains in education, exceeding the potential loss due to some kids getting less education-time. There’d be concerns some kids will pickle ree due to the unfairness of the system, but you rarely see the poor pickle ree-ing over the unfairness of the economic system. If they are taught that this is how the world works, they’ll mostly accept it. There will be concerns that students who already know the material will coast and not learn anything more, but you should expect that very thing in our current system: if a student is being taught material he already knows, he won’t magically absorb more advanced material.
I doubt there’s all that many people who, looking back upon their life, have said “Boy, I wish I’d started school earlier so I could have gotten more work in before I retired”. Doing better in the dating market is likely a heck of a lot more important. The problem with redshirting is that it’s zero-sum; some kid is always going to be the youngest, and the kid that is the youngest will _always_ be the youngest throughout their childhood and adolescence. If you want to solve the problem with the oldest in a cohort doing better than the youngest in general, you have to get rid of the cohorts by not batching kids by age in the first place. This seems like a difficult problem.
“I doubt there’s all that many people who, looking back upon their life, have said “Boy, I wish I’d started school earlier so I could have gotten more work in before I retired”. Doing better in the dating market is likely a heck of a lot more important.”
For men above a certain age the dating market correlates pretty strongly with the labor market.
“The problem with redshirting is that it’s zero-sum; some kid is always going to be the youngest, and the kid that is the youngest will _always_ be the youngest throughout their childhood and adolescence. ”
That part of it is zero-sum, but if you account for the lost year in the labor market due to our society’s stupid labor market structure, it’s negative sum.
For men above a certain age the dating market correlates pretty strongly with the labor market.
The best way not to be in the dating market at that age is being successful in the dating market when you’re younger. An extra year of earning earlier in life is not nearly going to make up for being out of step when your peers start dating.
(and yes, you get paid for that extra year. How much difference does it make in the long run?)
This is a variant of the classic deathbed regret cliche, which is based on people forgetting they were paid to be in the office.
A lot of that money goes to things they don’t actually derive much enjoyment from, like a bigger house, an extra car, or (more controversially) a house in a more prestigious school district.
There is also significant pressure in office jobs to work more and move upwards, compared to what people would choose on their own. This consists not only of financial incentives from your boss, but equally of social pressure from your coworkers (to whom you will appear less competent, and a freeloader on their backs, if you don’t work as much as possible). As someone working in an office job now, I can attest to this pressure.
In short, I’m pretty sure the deathbed regret is real.
When you get your first salary, it’s a huge change from “having to beg your parents for pocket money” to “having your own budget”. Depending on your family’s financial situation, a large part of that budget is spent on fun, at least during the first few months. Only later do you start thinking more seriously and saving money. At that moment, the job is new and exciting, and you still have some naive expectations about how the sky is the limit. So in the short term it feels fantastic.
On the other hand, a few decades later you have already seen dozens of failed projects, spent thousands of hours in pointless meetings, and gotten the memo that you are not going to be the next Einstein. Most of your income goes to boring regular expenses (mortgage, food, kids…). The only thing you really desire is to take a break; but you still have a few decades to go. You are offered a 10% salary raise in return for 50% more stress at work. There is strong pressure on you to take it. So you take it. The money somehow disappears (okay, now you live a bit closer to the center of the city, finally have a new car, the whole family buys slightly more expensive food and clothes, etc.), but the stress remains and damages your body.
The smart solution would be to start sooner and finish sooner. Either go the “early retirement” route; or have kids in your 20s when you are still full of energy, so that in your 40s the kids move out of home and get their own income, and you can optimize for a job that doesn’t kill you.
The problem with these death-bed regrets is that, realistically, people will not spend their extra not-work time with friends and family. They will spend their time playing Candy Crush, and then blame their job for making them too exhausted.
Your health comes first, followed by strong human relationships, followed by money.
I can absolutely say that I wish I had spent MORE time working when I was younger, because I spent a lot of time goofing off with bullcrap. I suppose I might say the same thing about now 5 years from now, but good God, current ADBG thinks 12 hours a day is enough.
realistically, people will not spend their extra not-work time with friends and family. They will spend their time playing Candy Crush
1) How do you know? 2) Even if true, it might be a good thing. Downtime after a hard work day is probably a good thing, stress- and health-wise, and very different from spending your entire day as “down time”.
I was bored to tears in nearly every class I had before college (exceptions were a couple math classes and a couple science classes). Starting me in school a year later would have added a year to my time utterly wasted in babysitting operations disguised as schools, for the benefit of being better in sports and more mature. Thanks, but no thanks.
You know, I always suspect there are people who *do* wish they’d spent more time in the office. Think of the scientist who was *almost* on the shortlist for a Nobel prize, but didn’t quite get there. Or the person whose business was *almost* the next Google, but didn’t quite succeed.
A lot of that money goes to things they don’t actually derive much enjoyment from, like a bigger house, an extra car, or (more controversially) a house in a more prestigious school district.
Sure, you don’t have to sell me on this perspective. But as far as the normies are concerned, I think the social status that those things are associated with does make them happier. At least that’s what revealed preference says. And I think that if they saw their work hours cut, most wouldn’t be much happier, as they’d substitute leisure activities that are also just social-status signalling games and don’t inherently lead to happiness. For instance, there’s a bar across the street which plays music very loud, not enough to bother me but enough that I can say it must be hell for those inside the bar. Why do they put up with it? For a lot of them, I think it’s a signalling game: they want to show how young and virile and tough they are, as opposed to uncool people like me who can’t take it. A lot of high school was like that, and my point is that you’d be trading time participating in that game for time participating in the adult version. A big difference between these games is that the adult version has positive externalities: the consumer can buy more, better, cheaper stuff as a result of that rat-race. No one benefits from the high school popularity rat race.
The deathbed regret cliche reflects social desirability bias: it’s socially desirable to care more about family and leisure and art and travel as opposed to money. But this conflicts with the fact that it’s socially desirable to have money. So you do one thing, say another.
Doing better in the dating market in high school basically means starting with higher status during your formative years and will be correlated with doing better in all markets for the rest of your life. Social status, dating, career, longevity, health, mental health, everything.
This is worth way more than starting work a year earlier.
“Gap years” are a revealed preference for this which will have a much smaller effect than redshirting, and yet they do have a significant effect on college performance.
The same things that make you better off in the dating market will make you better off in the labor market, so I doubt loss of a working year is a trade-off.
It could well be that much of the height effect in the labor market comes from being taller than your peers growing up. Those are your formative years after all.
Does anyone know where I could buy a piece of furniture that is a combination bed/sofa? It would be a twin XL mattress, but with a padded “headboard” along one of the long ends of the frame, so it would also be a sort of couch.
Try “daybed” first. You can also google “day lounge” for other varieties.
This sleeper sofa at Ikea is almost the dimensions you want.
It’s 1 inch shorter than a Twin XL, but also 11 inches narrower (without folding down the other half of the mattress that functions as the “headboard”).
For the kind of personality that wants to be remembered, the best job (for a typical person) would be an elementary school teacher. On average you’ll be remembered for the longest period of time by the greatest number of people.
A high school teacher definitely wins on the “remembered by the greatest number of people” part. Elementary school teachers have one class of about 25 kids, all day, all school year. High school teachers have totally different classes that size rotating through their classrooms 5 to 7 times a day, and then these change completely halfway through the school year.
Perhaps a high school PE teacher sees more students on average, but those students are also on average 7-8 years closer to death. So what’s the balance between length of time remembered and number of people remembering you?
Partly I remember my elementary school teachers better because I did have them for 180 days at 6 hours/day, and still saw most of them daily when I was in other grades. There are simply too many teachers in high school to see them everyday, and the interaction is sporadic enough that there are fewer opportunities for memories.
The average high school teacher teaches about 100 different students per semester though, so they only need to be remembered 12.5% as much as a grade school teacher by each student to have the same level of remembrance as them. Also, since high school teachers are around students that are going through more changes in life than grade school teachers, I think they are more likely to say or do something that affects a student’s life.
Epistemological status: My wife is a high school teacher who is often visited by former students who graduated more than 10 years ago. I’ve never known a person who kept in touch with a grade school teacher (unless it was already a family friend).
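The 12.5% figure a couple of comments up follows from simple arithmetic; here is a minimal sketch, assuming the class sizes and semester counts the commenter gives (one class of ~25 for a grade school teacher, ~100 students per semester over two semesters for a high school teacher):

```python
# Back-of-envelope "remembrance" comparison. Figures are the commenter's
# assumptions, not from any dataset.

grade_school_students = 25        # one class, all year
high_school_students = 100 * 2    # ~100 per semester x 2 semesters

# For the two teachers to accumulate equal total remembrance, each high
# school student only needs to remember the teacher this fraction as
# strongly as each grade school student would:
required_fraction = grade_school_students / high_school_students
print(f"{required_fraction:.1%}")  # prints 12.5%
```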
My mom would be approached by high-school aged and adult former students at times, who would address her by name, and astonishingly to me, would remember their names in return.
On balance, though, you may be right. Especially with things such as high school reunions that keep former students connected to the schools.
So a year or two back, I noticed spam calls started to get more sophisticated. The recordings seemed to be much higher quality, to the point where I could briefly be fooled into thinking it was a real person. They also did this Dora the Explorer type thing where they’d pause for you to respond. I was impressed the first time I heard it, though they would then go on to tell me about my credit card debt (I don’t have a credit card), vehicle warranty (I don’t own a vehicle), or claim to be from Visa/Mastercard account services (I wasn’t aware they had a merger), which kind of gave away the game.
Now the robocalls have gone back to sounding like a low quality recording or a text to speech program. Weirdly some of the text to speech ones still say “This is Heather from $COMPANY…”, as though that’s somehow still believable. It’s a strange thing to notice perhaps, but I can’t help but wonder why the change. Also now I get ones in Chinese. Any idea what the deal with that is?
[Epistemic status: somebody said it on the internets… oh wait.]
I read that the first stages of many scams may deliberately be rather stupid and obvious, precisely in order to filter for people who can actually be fooled by that kind of scam. That way, someone intelligent enough to be sure to see through it eventually drops away immediately, and they don’t waste… whatever resources go into the scam on trying to get them through the subsequent stages. Although this was said about the “Ethiopian prince’s inheritance” kind of scam, where the first stage is really cheap but the following stages are much more expensive by comparison; I don’t know how it would work for scam calls.
Rollback of regulations. The US government regulates cold calling, and those regulations have gotten more lax, leading to more people entering the market at lower quality. Additionally, one of those standards has to do with call quality, so the rollback has led to a decline there as well.
My impression is that regulations have been tightening. One of the hurdles, however, is that it’s currently (or maybe only recently, I had heard that this regulation was going to be fixed soon) illegal for your phone provider to reject likely spam / scam calls for you, even if you request it. Furthermore, scam overseas call centers are able to ‘spoof’ a random local number on caller ID because of the laws that allow your local politician or doctor’s office to outsource their calls to you and ‘spoof’ the number they want you to see should you want to call them back.
So I get a lot of what looks like local calls to my cell and my desk phone at work that turn out to be cold calls with robots at the other end. I would really like caller ID to somehow be able to tell me that ‘while this appears to be a local phone call, it’s actually coming from a call center in China’. It’s hard to believe that Verizon doesn’t have access to that information.
Verizon does warn me (on my cell) of some suspected spam callers.
The FCC should just require phone companies to prevent caller-ID spoofing, while allowing customers with more than one landline number to arrange which number outbound landline calls are considered to originate from. Caller-ID blocking should remain allowed, to allow for the use case where anonymity is required.
(The canonical example of this is calls from a women’s shelter. Showing no number is just as effective at preserving anonymity as spoofing the number.)
[Epismi-whatsit status: Most of what I ‘know’ of Britain is from watching episodes of Are You Being Served, Black Adder, and Dad’s Army so I really have little to support my thinking]
An idle thought sparked in part by @fion’s U.K. newspaper sub-thread: The current U.K. governmental “Left” and “Right” fit more the 1940’s U.S.A. Left and Right (Truman vs. Dewey) than the current U.S.A. left and right.
The Tories are Eton/Cambridge/Oxford which is roughly equivalent to the U.S.A. “Ivy league”, Labour is led by Jeremy Corbyn and still has ties to labor union left.
In the U.S.A., some Texas oil millionaire funding and the ascendancy of Barry Goldwater gave the U.S.A. “Right” a different, non-Ivy cast in the mid 20th century, and the non-public-school-teacher labor-left has been impotent since the late 20th century, with the governmental Left being more collegiate now.
I’m sure there’s holes in my scheme, which you may pick apart.
The Tories are Eton/Cambridge/Oxford which is roughly equivalent to the U.S.A. “Ivy league”, Labour is led by Jeremy Corbyn and still has ties to labor union left.
From my understanding, Jeremy Corbyn is kind of the last gasp of that subset of the left, though, and isn’t really representative of their membership as a whole (although that might be wrong, since he’s somehow kept his grip on the party). My understanding is that Labour’s leadership is as much or more Tony Blair (Oxford) as it is Corbyn. Furthermore, Corbyn himself is from, if not the upper class, at least the upper-middle class.
True, Tony Blair didn’t seem remotely working-class, not even in the “Michael Caine/Maurice Micklewhite Jr. seems posh to Americans” sense, while Blair’s American equivalent Bill Clinton could turn on an Arkansas accent sometimes and wasn’t born rich (though he did go to Oxford as well, and later became rich).
Also, on the American side, Truman’s predecessor Roosevelt was very patrician and Truman’s successor Eisenhower was more middle-class, but it’s more a tendency. I have a hard time imagining the Democratic Party nominating someone like Jimmy Carter today (Sanders is interesting, just about the only urban working-class candidate I can think of after maybe Al Smith); on the Republican side, even though his candidacy was recent, I have a hard time imagining someone like Mitt Romney being nominated again soon.
The basic way I like to explain English politics to Americans (though I’m an American myself): Imagine if the 19th century Populist Party had not been absorbed by the Democrats and had evolved into a full-on socialist party. Then Britain went through a period of a three-party system between the Liberals (Democrats), Tories (Republicans), and Labour (Populists/Socialists). But Labour slowly (and then very quickly) ate away at Liberal support to the point that the Liberals are now a minor party. British politics is thus basically Republicans against Socialists. Keep in mind socialists are not ‘Democrats but more radical’. They are a distinct group. For example, the Liberal Democrats remain more pro-EU and pro-immigrant than either Labour or the Tories.
This means British politics tended to swing more radically than US politics. It basically switched between conservative monarchist capitalists and nationalizing leveler socialists until roughly the 1990s-2000s. (Worth noting: both sides remained committed to democracy and basically allied with the United States, which ties into the Indian thread on how socialism does not necessarily lead to dictatorial Communism. And also doesn’t cast a great light on people who honeymooned in Moscow when there were vigorous socialist movements in western Europe.) Labour lost ground in the ’70s, partly due to internal conflicts: the party in control of the government and more radical union heads clashed several times. There was also a right wing resurgence in the 1980s, so Labour had to move to the center to remain competitive. These days, it appears to be swinging back to the left.
Also, something I want to highlight: going to a good school didn’t used to be a precondition for the presidency in the US, especially for conservatives. The past five presidents have all had Ivy League educations and that is the longest streak in history. If you exclude those last five (Bush I through Trump), there were fifteen Presidents in the 20th century: ten Republicans and five Democrats. All five Democrats had Ivy League educations. Two of the Republicans did. Eight did not, though they all had college educations. This is the opposite of England where (as you identify) Etonian education and the like is much more of a conservative marker.
That seems a good tutorial on the British, thanks!
On your history of recent Presidents’ schools, I’ll add that U.S. Supreme Court Justices didn’t always go to Harvard or Yale; one who was nominated in the 1940s didn’t even go to law school, he apprenticed as a paralegal (it’s now rare, but some still become attorneys that way).
Exactly. That was one big reason I was disappointed when Trump failed to nominate Amy Coney Barrett. (The other one was that I consider her conservative Catholic ties a plus, in that they’d make her more sensitive to religious liberty issues.)
You’ve left out the part where the socialists and Social Democrats split, and the Social Democrats merged with the Liberals (which is why they are now the “Liberal Democrat” party and not just the “Liberal” party).
A nitpick: Harry Truman did not have an Ivy League education, or even a degree. He took classes in business and law from a couple different schools in Missouri, but never graduated.
Labour under Blair was much closer, ideologically speaking, to the US Democratic party of the time. The difference today is that in the UK the party members decided to put a socialist in charge, whereas in the US they just came close (see the 2016 Democratic primary).
@broblawsky,
True, and Labour going socialist is returning to its roots, while the closest the U.S. got to socialism (and indeed Mussolini-style fascism, while also fighting fascists) was during the Second World War, when Henry Wallace was still Vice President.
Anyway, my larger point (if I had one) is that both in style and substance the U.K. and U.S.’ ‘left and right’ really don’t seem to match up one for one.
Off the top of my head the (right) Christian Democrats of Germany looks like the moderate wing of the U.S. Democratic Party, and the (left) Social Democrats of Germany look more like the progressive wing of the U.S. Democratic Party combined with the British Labour Party, with no real equivalent of our Republican Party.
Different U.S. state parties used to be different as well: the Minnesota Democratic–Farmer–Labor Party used to be quite different from the Mississippi Democratic Party, but now they’re pretty much identical to each other and to the national Democratic Party; New York Republicans used to be different from Arizona Republicans, et cetera.
Different U.S. state parties used to be different as well: the Minnesota Democratic–Farmer–Labor Party used to be quite different from the Mississippi Democratic Party, but now they’re pretty much identical to each other and to the national Democratic Party; New York Republicans used to be different from Arizona Republicans, et cetera.
I think there are still large differences between different regions in the two parties, although these do not consist of large regions anymore but mini-regions. I worked with a guy a few years ago who was a Democratic activist from Atlanta who moved to Minnesota for a job. He was living in Minneapolis, but when he went to a Democratic caucus in the city, he was shocked by the ideology there. He told me they were essentially all socialists, which sounds consistent to me with the usual politician rhetoric I hear around here. I don’t think he would have had the same reaction if he had gone to a suburban Democratic caucus, and certainly not if he’d gone to a rural one in Minnesota. So there still are great differences within each party in different areas.
and indeed Mussolini style fascism while also fighting fascists
One of the historical facts that tends to get left out of the usual American picture is that FDR was generally pro-Mussolini before the war and that his economic approach in the first New Deal was essentially fascist. We are used to thinking of “fascism” as a term of abuse, but that isn’t how it was seen in its early years.
I feel like aggressively conservative folks emphasize that a lot, which is why the “fascism is right-wing/fascism is left-wing” argument always heats up.
I think that someone here (perhaps you) called the New Deal “the greatest hits of fascism and communism”, which I found pretty accurate.
What’s interesting to me is that the historicity argument is so disconnected from actual policy. On the right, people use socialist as an epithet; on the left, some people embrace it as a badge of honor; but the socialist policy package is universally rejected. The number of people that actually want to nationalize Google or Walmart rounds down to zero. Likewise, while you can get plenty of people to praise the New Deal, and even some that want to revive the WPA, you don’t find anyone that wants to set up industry councils where wages, prices, and production volumes are hashed out by agreement between workers, owners, and the government.
(All from a US perspective. There may well be non-trivial numbers of actual socialists and (economic) fascists in other countries.)
How does the New Deal materially differ from elements of feudalism?
And during the times of feudalism, was there a left wing and a right wing? Perhaps it came down to who owned the land (non-militant religious orders = left wing, lords = right wing?; agricultural land = right wing, cities = left wing?)
The number of people that actually want to nationalize google or walmart round down to zero.
This sort of assertion never felt very fair to me. While I can agree that Warren, Sanders, et al. aren’t saying the N-word, they’re still saying they should be able to do all sorts of things that end up meaning the same thing. They still distrust any economic agent that makes a profit, and act consistently with a belief that they should be able to approve or disapprove of any source of that profit.
Meanwhile on the other side, what you are saying, and I’ve heard it before, strikes me as very unfair. In my mind there is a huge gulf between ‘is skeptical of profit and in favor of lots of regulations’ and ‘wants to collectivize the means of production’. Not sure how to bridge that gap.
In my mind there is a huge gulf between ‘is skeptical of profit and in favor of lots of regulations’ and ‘wants to collectivize the means of production’. Not sure how to bridge that gap.
Well, to bridge it, you’ll have to convince me that Warren, Sanders, et al. don’t want to collectivize the means of production. Given that Warren keeps saying she wants to do things that will functionally result in collectivizing the means of production, and Sanders, among other things, applauded Chavez for actually collectivizing the means of production, I admit, you have a hard task ahead of you.
Merely stating there’s a huge gulf in your mind won’t be enough; surely you can see why not?
“You built a factory out there, good for you. But I want to be clear. You moved your goods to market on the roads that the rest of us paid for. You hired workers that the rest of us paid to educate. You were safe in your factory because of police forces and fire forces that the rest of us paid for.”
“Now look, you built a factory and it turned into something terrific or a great idea, God Bless, keep a big hunk of it. But part of the underlying social contract is you take a hunk of that and paid forward for the next kid who comes along.”
“I hear all this, you know, ‘Well, this is class warfare, this is whatever.’ No. There is nobody in this country who got rich on his own – nobody.”
“Other countries around the world make employees and retirees first in the priority. For example, in Mexico, the bankruptcy laws say if a company wants to go bankrupt… obligations to employees and retirees will have a first priority. That has an effect on every negotiation that takes place with every company in Mexico.”
“Every time the U.S. government makes a low-cost loan to someone, it’s investing in them.”
“To fix this problem [of stagnant wages] we need to end the harmful corporate obsession with maximizing shareholder returns at all costs, which has sucked trillions of dollars away from workers and necessary long-term investments.”
“You built a factory out there, good for you. But I want to be clear. You moved your goods to market on the roads that the rest of us paid for. You hired workers that the rest of us paid to educate. You were safe in your factory because of police forces and fire forces that the rest of us paid for.”
Not a normative claim at all.
“Now look, you built a factory and it turned into something terrific or a great idea, God Bless, keep a big hunk of it. But part of the underlying social contract is you take a hunk of that and paid forward for the next kid who comes along.”
Is a normative claim! But the normative claim is “taxes should exist”… not equivalent to “seize the means of production”…
“I hear all this, you know, ‘Well, this is class warfare, this is whatever.’ No. There is nobody in this country who got rich on his own – nobody.”
Also not a normative claim.
“Other countries around the world make employees and retirees first in the priority. For example, in Mexico, the bankruptcy laws say if a company wants to go bankrupt… obligations to employees and retirees will have a first priority. That has an effect on every negotiation that takes place with every company in Mexico.”
Being pedantic, not a normative claim. If the implied claim “we should change our bankruptcy laws to be more like Mexico’s” counts as “collectivising the means of production” by your definition, then you really should’ve said so before, since that’s certainly not the usual one. Not doing so makes it seem like you’re trying to smuggle the various connotations of the kolkhozes etc. into an obviously incomparable situation.
“Every time the U.S. government makes a low-cost loan to someone, it’s investing in them.”
N o t a n o r m a t i v e c l a i m
“To fix this problem [of stagnant wages] we need to end the harmful corporate obsession with maximizing shareholder returns at all costs, which has sucked trillions of dollars away from workers and necessary long-term investments.”
Kind of a normative claim I guess, but it hardly seems specific enough to be an example. Unless you’re saying that anything other than “obsession with maximising shareholder returns at all costs” is collectivising the means of production.
Every claim you assert as not normative either implies one, or is most easily explained as motivated by a normative belief Warren holds: that government ought to play the parent to wayward private interests. The only way she seems to see to do that is to collectivize various parts of the economy, whether she’ll admit it or not. You can argue that she doesn’t want absolute collectivization, but that’s very faint praise to an audience that thinks we’re already collectivized to the point that any additional amount is harmful to people, and doubly so if it’s coupled with shaming.
For example, Warren says taxes should exist. She also implies they’re a moral good, and conspicuously fails to favor any limit on them, or even to consider the possibility that those limits might already be surpassed – her quote is a moral sneer at anyone who thinks they are.
“Maximize shareholder returns at all costs” is a common bogeyman I see from the left. Sure, capitalists believe this is the role of a CEO. But that’s only in the context of a larger system in which corporations are necessarily limited by the ability of consumers to go elsewhere with their business. If the world’s most ruthless CEO is compelled to maximize shareholder returns by offering a product so efficient that it provides every consumer a better return than if they do without, then suddenly “maximizing shareholder returns” sounds like a really good thing.
If a CEO were allowed to force consumers to buy the corporation’s product, then we’d have a problem. But that’s a problem only if no one else is allowed to offer that product, and that’s only the case if some external body made a rule forbidding it, and forced everyone in the market to follow that rule. But no matter how you slice it, that wouldn’t be capitalism. If Warren is truly capitalist to her bones, she’d call to abolish such rules rather than try to set up even more, but I’ve heard her make zero noises about doing so, which forces me to believe she’s fibbing about being capitalist to her bones.
You can argue that she doesn’t want absolute collectivization, but that’s very faint praise to an audience that thinks we’re already collectivized to the point that any additional amount is harmful to people, and doubly so if it’s coupled with shaming.
Right, so as I suspected, you are equivocating between collectivisation and any kind of taxes and regulation. You must surely be aware that “collectivisation” and doing things to “the means of production” have certain connotations. Therefore I must conclude that either:
1. You are badly mistaken about which of those connotations are widely considered salient. If so, you should know that most people consider an association with Marxism, dictatorship and mass deaths etc. to be the salient features, and you will therefore be communicating ineffectively if you use “collectivising the means of production” to refer to completely different things.
2. You have some argument about how Warren’s policies would lead to millions of deaths if implemented.
4. You’ve declared your own definition of what’s salient, that fails to address the genuine annoyance of people with sanctimonious attempts to tell them what they can do with their wealth;
ignored the examples from history of the wealth-destroying and wealth-creation-avoiding economic incentives that such policies put into play, which can result in mass starvation if a society doggedly continues such policies, though likely will just result in their eventual repeal and a lot of wasted time;
and made a rude accusation of dishonesty with a bad faith argument that makes me very uninterested in indulging your approach to discussion.
You’ve declared your own definition of what’s salient
No I haven’t. I’m saying that everyone except for (or, I strongly suspect, including) you associates “collectivising the means of production” with kulaks not taxes. This is an empirical claim, not like my opinion man. Do you seriously disagree with it? Or was 3. on the mark?
I think taxes are so far away from the central example of “collectivizing the means of production” that we aren’t even speaking the same language. To be fair, I don’t think that mass starvation is required either. The central example would be a call to nationalize some company or industry, something that is conspicuously absent.
It’s perfectly fine to be some kind of ultra-libertarian, but it isn’t so great to argue semantics using private definitions flowing from the ultra-libertarianism. Self awareness of idiosyncrasy seems like a reasonable ask in a conversation.
Her persecution of Backpage also bothers me, though admittedly most of the candidates look bad on similar issues. But her campaign has had plenty of attention and plenty of time to catch on, and it really hasn’t; I think if somebody’s going to surge late and be a surprise, it’s going to be somebody else, not Harris. And I don’t really expect a late surge from anyone. I think it will be Warren, though obviously Biden, the long-time and still-current frontrunner, can’t be counted out.
I’ve seen a couple of articles about it on left-ish sites, but the sad fact is that at this point “American troops are somewhere in the Middle East” is no longer a particularly significant story. Impeachment is more relevant and more attention-grabbing right now.
Has anyone checked out the recently-released 3rd episode of Canadian-Wilderness survival game The Long Dark?
I’ve been busy this week and haven’t gotten to spend more than a couple of hours with it yet. I whfg tbg gb gur cynarpenfu naq sbhaq gur fheivibe.
I’m not really sure what I think so far. I recently replayed the previous two episodes and mostly enjoyed them, although not as much as I did on my first play through. I know I’m pretty near the start of the episode, but there are some things I’m not liking so far:
1. There are a crazy number of wolves. Within the first 20 minutes of the game I was being stalked by four at a time. With how dense these packs are around places the game forces you to go, I constantly feel like I have to play with a gun in my hand, which I feel weakens the experience.
2. I haven’t liked the timber wolves so far – it seems like they demand a level of aiming to deal with that the game isn’t optimized for.
3. I don’t like the change in tone. Episodes one and two were all about mostly being alone in a quasi-mystical apocalypse. The characters that you did meet were all archetypes and seemed more like characters out of a myth than real people. They’ve changed that here to being more grounded with other characters, and I don’t know if that fits the overall aesthetic.
Any thoughts on the final Indonesian report on the Lion Air crash last year? I caught the tail end (sorry, couldn’t help myself) of a radio piece; it sounds like the ground crew broke a sensor, a previous flight crew handled the problem just fine but didn’t report it, and the final crew was undertrained and so weren’t able to remain in control? My sense is that this casts the whole situation as more of a series of preventable errors than the all-Boeing’s-fault narrative I was getting.
Nice! I fell in love with this record when it first came out in May (today’s release is a rerelease on Napalm Records), but I never caught the stoic edge of the lyrics, just that sweet sweet melodeath. I’ll have to pay more attention to the lyrics next time I listen to it.
If you like melodeath with excellent lyrics, Aether Realm’s Tarot is essential.
What struck me was that “the” is 6% of the words used in English. A lot of languages don’t even have “the”! It seems weird that we make so much use of a word which may be unnecessary. Do the languages without “the” make the “the/a” distinction some other way? Or do without it?
Are there other extremely common words which aren’t universal?
We actually had a lot of discussion on a/the over the past few open threads.
My contention is that articles are virtually entirely useless in the vast majority of cases in which they’re used (fun game: try and think of sentences in a realistic setting where the meaning of a/the isn’t apparent from the context).
Others pushed back on this, but there seemed to be general agreement that articles are unnecessary in many cases.
Languages which don’t use articles or an equivalent grammatical construction (many East Asian languages, for example) will have to resort to using some kind of phrase in their place, in the rare cases where it actually makes any difference.
If you’re reading Moby Dick, you know what whale he’s talking about. When there’s the potential for ambiguity, “the” is usually referring to items that have already been discussed in the very recent past, so you know which of the possible referents it refers to.
Consider that in my previous sentence, “the” appears 3 times outside of quotation marks: now try removing those occurrences from the sentence and explain to me what meaning has been lost? It sounds weird because we speak a language where we have to use it, but that’s just a feature of the language.
I’m not claiming articles don’t specify meaning, just (a) usually it’s unimportant to the speaker’s intentions (again, remove “the” before “speaker’s” just there – what have we lost?), (b) when this meaning is important, it’s usually available from context, and (c) when the meaning isn’t available from context, you can always use a phrase or sentence to specify what you mean. As proof, I submit to you the fact that probably the majority of people in the world use languages without articles, and they seem to be able to communicate pretty well.
If you’re reading Moby Dick, you know what whale he’s talking about.
Did you mean “the whale”? Arguably, “what whale” could just as easily refer to a species of whale, rather than to a specific whale.
Your points are well-taken (and this is all for fun, anyway). But how about another example: “Did you cut down the tree today?” This is basically to say, “There is a particular tree you were going to cut down, and I would like to know if you cut that particular tree down today.”
“Did you cut down tree today?” feels awkward, but preserves essential meaning.
“You cut down tree today?” removes the arguably-unnecessary did, if the speaker is addressing the treecutter after the anticipated time of treecutting.
“Cut down tree today?” removes you in favor of context, as hopefully the speaker is addressing the person who was to do the tree-cutting.
“Cut tree today?” because arguably to cut a tree “up,” that is, into discrete pieces, is just as correct as to cut it “down,” and therefore each is made irrelevant by the other?
“Cut tree?” because hopefully the speaker is addressing the treecutter in a manner that is relatively time-local but also antecedent to the anticipated treecutting.
“Cut?” because it is likely that the magnitude of cutting down a tree would preclude (or simply dwarf in importance) the acceptance of other cutting-type tasks, so context tells us that treecutting is likely the only cutting that the speaker would be asking about.
“?” because for God’s sake how many times do I have to ask for you to cut down the stupid tree?
Well, to be honest, communication in this old marriage situation you’re alluding to doesn’t need any words. A mere glance, or ffs, a lack of glance or a lack of noise conveys the “Did you cut down the tree today” well enough :p 😀
To pedantically reply to a Bill Murray line… imagine a language without articles where he says something like “I’m one god, not only god”.
It’s not quite as pithy, but the meaning is there. (Well, it occurs to me that the meaning of “only” is ambiguous because it can mean both “the one example of X” and “entirely composed of X” – let’s pretend it only means the former.)
try and think of sentences in a realistic setting where the meaning of a/the isn’t apparent from the context
This is actually hard to do with many/most words, especially grammatical words, because normal language is so redundant. If you mishear a word that someone spoke to you in conversation, you can usually reconstruct it by the end of the sentence.
Asking about this brought up a somewhat relevant example of this in the Uralic languages, which all exhibit a feature where the third-person possessive suffix denotes definiteness as well, except the Baltic Finnic languages (Finnish, Estonian) and Hungarian, which seem to have lost that feature (the fact that it’s present in all the other branches implies that it was a feature of the proto-language).
That said it’s the kind of thing which is difficult to prove anyway: most languages don’t have an extensive written history (and for most of those who do, it’s rarely much more than a handful of centuries), and it’s really hard to reconstruct a lost feature from contemporary forms alone, so this is a particular case where absence of evidence is only weak evidence of absence.
I seem to recall a general principle in languages that the frequency of usage of words is inversely proportional to their information content. So the most common words tend to be words like “the”, “a”, “and”, “is” which convey very little information and indeed can often be left out entirely in many languages (a lot of languages don’t bother with “the dog is black and white” and just say “dog black white”).
That doesn’t mean that “a/the” conveys no information of course. Information conveyed by the article can include notably:
—New vs old information; “I saw a man [new]; the man [old] told me that…”
—Hypothetical or generic vs real referent; “A man could do this [any man, the idea of a man], but the man who did that… [the actual, real person]”
—Specific vs universal; “A boar can gore a tiger [claim about the potential of individual boars]; the siberian tiger is a fierce predator [claim about all siberian tigers]”
Languages that lack such a distinction may have other strategies, like:
—Word order; Russian tends to put new information near the end of the sentence, and old information near the beginning.
—Use of demonstrative adjectives, which can weaken over time and become definite articles — this is actually what happened in the Germanic languages (including English), the Romance languages, Greek, and South-Eastern Slavic languages like Bulgarian — the, der, le, el and o all etymologically go back to demonstratives.
—Use of a “topic comment” structure which marks a particular word in the sentence as salient, eg Japanese normal sentence “gohan o tabemashita” rice [object] eat[past] “I ate some rice” vs “gohan wa tabemashita” rice [topic] eat[past] “I ate the rice; speaking of the rice, I ate it”.
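The inverse relation between word frequency and information content mentioned above can be sketched in a few lines of Python. (The toy sentence below is invented purely for illustration; surprisal here just means the standard −log₂(p) of a word’s relative frequency in the sample.)

```python
from collections import Counter
from math import log2

# Toy corpus: function words like "the" repeat, content words appear once.
text = (
    "the dog is black and the cat is white and "
    "a raven saw the dog chase the cat"
)
words = text.split()
counts = Counter(words)
total = len(words)

# Surprisal of each word: -log2(p). Frequent words carry few bits;
# rare words carry many -- the inverse relation described above.
for word, n in counts.most_common():
    p = n / total
    print(f"{word:>6}: freq {n}, {-log2(p):.2f} bits")
```

Even in this tiny sample, “the” (4 occurrences out of 18 words) carries about 2.2 bits, while a one-off word like “raven” carries about 4.2 bits, which is why a language can often afford to drop the former but not the latter.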
You can think of “the” as a grammatical element (which just happens to be a word rather than an affix), like the past tense suffix “-ed” or the plural suffix “-s”. A characteristic of grammatical elements is that they have to be used where they are applicable, rather than only being used when the speaker explicitly wants to convey that information, and therefore are a lot more common than they may need to be. Languages without a given grammatical element will generally have some non-grammatical way of indicating the same distinction, the difference being that the speaker has the choice as to whether to use it.
Do the languages without “the” make the “the/a” distinction some other way? Or do without it?
Articles in English are mostly semantically weak: in most cases they don’t convey any meaning that can’t be inferred from the context (which is probably why the “the the” trick works). In the rare cases where they do convey meaning, you can replace “a” with “some” and “the” with “this/that”, preserving the meaning.
Most languages, even those without articles, have determiners (either function words or affixes) that represent these meaning distinctions. E.g. in Latin: “Persuāsīt populō ut eā pecūniā classis aedificārētur” – “He persuaded the people that a fleet should be built with the money (with that money)”
Are there other extremely common words which aren’t universal?
I think my favorite part about that wikipedia article is that I don’t even agree with some of the English examples of usage of such a basic word as “Yes”.
if asked, Are you not going? (行かないのですか? ikanai no desu ka?), answering with the affirmative “はい” would mean “Right, I am not going”; whereas in English, answering “yes” would be to contradict the negative question.
I thought that answering yes in English to that question meant “Yes, I am not going”. Although I would usually view “yes” as somewhat ambiguous and would say “yes, I am not going”.
Although I think “no, I am not going” means the same thing as “yes, I am not going”.
It’s not 16 different words, it’s different forms of the same word. Word frequency analysis in English would not, say, count “be”, “am”, “are”, “is”, “was”, “were”, “been” as different words either.
I hope every President does it to pack the court and it becomes an out-of-control arms race with a SCOTUS bigger than the Congress in a few decades. By 2100 every citizen is part of the SCOTUS. Of course it’s no longer practical for it to gather in one place so they elect a guy who names a nine-person subcommittee to do the actual legal work, and the rest of the SCOTUS just goes along with it.
Robert Bork received confirmation hearings and a floor vote, which he straight up lost 42-58, including no votes from Republican senators Lincoln Chafee, Bob Packwood, Arlen Spector, Robert Stafford, John Warner, and Lowell Weicker, as well as all Democratic Senators.
But Ted Kennedy was mean in his speech that accurately cast Bork as an extremist even compared to other conservative judges, which is clearly just as bad as Mitch McConnell refusing to hold either hearings or a vote.
As I read him @Jaskologist isn’t suggesting the two events are equatable, he’s suggesting the inevitability Brad is referring to can be traced back to Bork.
Why would it be inevitable? Jaskologist may or may not be in agreement with Republicans who think there was anything untoward about a judge losing a confirmation vote, but it’s an indictment of them, if not of him, if that was the start of a downward spiral.
no votes from Republican senators Lincoln Chafee, Bob Packwood, Arlen Spector,
It’s times like this I wish nominative determinism was a stronger force.
Senator Lincoln Chafee, who looks like Abraham Lincoln constantly scratching himself.
Bob Packwood, who packs wood into ~~trucks~~ women. That one works!
Arlen Spector, who serves his terms after death.
I mean we could say “post eden” world too. But I don’t think the Bork escalation was a) a first cause b) the proximate cause, or c) the most important cause.
The Republicans’ retaliation to Bork was Thomas. They nominated a very conservative and controversial jurist and crammed him through.
The Democrats’ retaliation to Thomas was the filibuster under Bush II. They blocked many qualified jurists in the lower courts.
The Republicans’ response to the filibuster under Bush was Garland.
Those are all relatively proportional retaliations.
Moving to “packing the Court” would be a dramatic escalation. Not that I don’t think the Democrats would, but there is a big difference between “Tit for tat” and “nuke for tat”.
A proportional retaliation would be for a Democratic Senate in 2020 or 2022 say that they wouldn’t advance any of Trump’s nominees.
Lame duck has generally referred to the period between a presidential election where the incumbent isn’t running / didn’t win and the inauguration of the next president. Not the entire second term of a presidency.
And no, altering SCOTUS is in no way proportional to any alteration at all of lower courts.
I’d be inclined to disagree in cases like Estrada, where the lower court filibuster was primarily driven by not wanting the nominee to be a possible future Supreme Court nominee.
I’d put the inevitability back further than Bork, though–I think the giant Cathedral power-grab of the 1950’s and 1960’s managed through the Supreme Court made the court a political body, and the escalation since was at that point inevitable.
@EchoChaos
Usually I have high conviction about ought and low conviction about is. Ought after all lives mostly in my head. In this case that’s flipped. I don’t have especially strong feelings about whose fault it all is or whether this or that escalation is a reasonable one; but I’m pretty sure 1) Court packing is coming and 2) would be further off if Garland had been confirmed, and probably even if he had gotten hearings.
Both of those seem true statements to me, although part of the reason they are true is that turning the Court from leaning left to leaning right on social issues (it has leaned mildly right on fiscal since about Reagan) is a BIG DEAL to the left.
Garland would’ve put that day further out, perhaps indefinitely. Gorsuch and especially Kavanaugh bring it further in, if it isn’t here now.
Lincoln Chafee was in the Senate 1999-2007. He inherited the seat from his father, John Chafee. Then he was reelected and served a complete term. John Chafee was in the Senate from 1976 to his death and voted against Bork. Your other names are correct. While we’re at it, David Boren (D-OK) and Ernest Hollings (D-SC) voted for Bork.
It seems like a dumb idea, but right now I’m optimistic that it’s dumb enough that nobody will actually try to do it, rather than so dumb that it will inevitably be done.
Like the idea of President Obama paying for his various plans by ordering the mint to make a trillion-dollar coin.
I would like to see it increase in size for the reasons mentioned below, and by means mentioned below:
Means:
Optional 1) The office of the President periodically polls members of federal and state courts and bars (i.e. justices and lawyers) on who they recommend for a SCOTUS appointment, and why.
Semi-optional 2) From those people who have multiple recommendations (preferably from multiple court districts) the office of the President selects a half dozen or so candidates that they find preferable.
3) These half dozen or so candidates are submitted to the current Senate, who investigate, advise, and vote on them. The candidates who receive greater than 50% of the vote are next in line for an open SCOTUS seat, in the order in which they received the most votes (or as determined by an additional vote of the Senate if there is a tie).
With the exceptions of openings that occur immediately after an election, this will help prevent lengthy periods of absent Justices, and will likely prefer more moderate candidates (whether that’s a pro or a con I don’t know).
Note: For the initial increase in membership I’d like either a Senate supermajority requirement for each new member, some sort of judicial democracy that accounts for the wishes of the political minority, or increase it by an even number and allow each “party” in the Senate to select a candidate on a 1-to-1 basis, with the President agreeing to “nominate” said candidates, and the entire Senate agreeing to confirm said candidates as a group.
Reasons:
Originally it was intended that there would be a SCOTUS judge per circuit court, so that the SCOTUS judge could ride their circuit when SCOTUS was not in session. Well, we’ve got more than 9 circuits now.
We’ve got a lot more people, and I would like a bit more diversity on the court (all meanings of the word). Mandatory term limits could help this (though placing the former Justice into one of the Federal District courts instead of actual forced retirement would likely serve the purpose of the founders in having no term limits), but so would increasing the number of Justices and letting demographics take their course.
We’ve got a lot more laws and cases, and I would like more of these cases taken up by SCOTUS than currently occurs. Settled law, that is universally applicable across the nation, is preferable to patchwork laws. To these ends I’d like SCOTUS to have the ability to subdivide its members into panels of 5 or 7 or some other odd number to hear and decide cases. Each panel’s decision would immediately be reviewed by the full court; unless a majority of the full SCOTUS voted to re-decide (or even re-hear) it, the decision would stand as binding, just as if the full SCOTUS had decided it (this would decrease those instances where various circuits have contradictory rulings). Increasing the number of Justices to 15 or so would allow this sort of framework.
I don’t think it’ll happen. Federal law limits the size of the supreme court, so you’d need the president, supermajority of the senate, and majority of the house to all agree it’s a good idea. That means all those people being the same party (itself unlikely) and also all those people not worrying about what happens when the other party takes power.
The supermajority of the senate requirement is at the sufferance of a majority of the senate, and is not long for this world IMO.
It is true that in order for this, or much of any new legislation going forward, both Houses of Congress and the White House need to be in the hands of one party. This tends to happen the first Congress when a new President takes over. The last few Congresses where it was true were: 115th (Trump), 111th (Obama), 109th (GWB), 108th (GWB), 103rd (Clinton). The 119th Congress (Jan 2025) would be the one I’d put my money on for packing the Court.
One of the differences between French and American culinary practice is that the French tend to be conscious of the seasonality of produce. Local fruits and vegetables are usually better than those that have to be imported from afar, but they are only available during some parts of the year, toward the end of the growing season. As I understand it, the French tend to be aware of this, and gear their cooking to what is available, whereas Americans (and Canadians, for that matter) just get produce from wherever (local or imported) and cook the same stuff year round.
That raises the question of what the French do when local produce isn’t available, such as during the northern-hemisphere winter. Do they eat old-style preserves? Or do they put on T-shirts and baseball caps, LARP as American idiots, and buy imported veggies?
I think that eating the same stuff all year is a fairly recent development. I remember being a kid in the ’80s and the food we ate was heavily influenced by the season. Also, most Americans live within a day or two of places that can grow produce year round, so it makes it a lot easier to eat the things you like all the time. I don’t know if France has that same access to high quality produce.
It also seems that France’s food culture is stronger than America’s, so they are going to be slower to change the way they eat in a given season, even if summer vegetables are available.
You seem to have a very negative opinion of American food culture. May I ask which parts of the US you’re using in this comparison?
As a resident of the northern midwest, seasonality of produce has exerted a strong influence on my menu for my entire life. Sausage stuffed zucchini is delicious, but is only generally made in mid-to-late summer (when the zucchini have grown large enough for proper stuffing). This is despite the fact that meat-and-starch-stuffed-squash, as a heavier and richer dish, fits more easily into a cold-weather menu. As we get towards fall, the stuffing recipe changes and we start stuffing pumpkins instead (the pumpkin stuffing involves cranberries, which balance nicely with pumpkin but would tend to overpower the milder-flavored zucchini). Once pumpkin season has passed, the squash in our diet changes to mostly spaghetti and other winter squashes, but those don’t tend to stuff as well, so those recipes don’t usually come out again until next year’s zucchini is ready.
These trends are even more pronounced with fruit. I don’t think many Michigan residents make it past five years old without knowing when apple season is and what recipes only get made then. Traverse City’s biggest tourism event is centered around the cherry harvest.
This isn’t to say that we never use imported or preserved produce, but casting this as “Americans don’t know the difference between seasonal and non-seasonal produce” seems like a huge jump that I have a hard time imagining justification for.
I felt this was a poignant description of the appeal that fiction has:
No matter how clear things might become in the forest of story, there was never a clear-cut solution, as there was in math. The role of a story was, in the broadest terms, to transpose a problem into another form. Depending on the nature and the direction of the problem, a solution might be suggested in the narrative.
If the inconvenience of choosing the vegetarian option is less than X, you must. Otherwise you may eat meat. The factors of inconvenience can include the price, taste, effort of asking etc.
Yep, I basically do that, mostly to work around the problem of the “token vegetarian option”. If I have to choose from a menu of 10 things, but the one or two vegetarian things don’t appeal to me at all, then the other 8-9 choices with superior taste are just there, taunting me. It requires a lot of willpower to resist that, and I don’t consider it 100% my fault when I give in occasionally. Society takes at least part of the blame by consistently presenting me with temptations, exposing me to peer pressure, and making everyday tasks like shopping considerably more burdensome [1].
The flexibility I give myself makes it possible to maintain a long-term commitment even in the face of ego depletion. If it was “all or nothing”, I’d choose nothing. Being 95% of the way there is a huge improvement over that baseline.
I still call myself a vegetarian. It leads to less confusion, and I know with a high degree of certainty that if you just dropped me into a 100% vegan society, I’d very easily adapt and wouldn’t miss a thing. I still have some uneasiness about being called out for “pretending” to be a vegetarian etc, even though this has never actually happened.
[1] Prepared food in particular can be a highly inefficient market. There are products with a high number of properties, you are offered a tiny subset of all possible combinations of those properties, and on top of that it’s often hard to know in advance what you’re getting. This is fine in situations where most properties are either “yeah!” or “meh”, but gets very troublesome as soon as one or more properties become a hard “no”, which happens with meat, but also stuff like food allergies, strong aversions to certain tastes/textures, etc.
If I have to choose from a menu of 10 things, but the one or two vegetarian things don’t appeal to me at all, then the other 8-9 choices with superior taste are just there, taunting me.
A person can always ask for a substitution. This doesn’t work at places like Cracker Barrel, but does at Taco Bell, and presumably many more upscale restaurants.
I’m reading Vaclav Smil’s “Creating the Twentieth Century” and struck by the extraordinary profusion of innovations that came out of the last two decades of the 19th Century (the middle of the period he describes). It struck me too that you could make a good argument that the art, music, literature, architecture etc of that time were similarly extraordinary, with an unusually high number of timeless classics per year– admittedly my personal affection for Dvorak, Sibelius, Art Nouveau/Secession, ragtime etc biases me here.
And yet the political and economic landscapes of that era were horrible. The Long Depression and the labor wars; the rise of authoritarian nationalism, anarchist terrorism, and brutal imperial reaction to the prior two; the spread of socialism among the intellectual classes; the institutional entrenchment of pseudoscientific racism. Must have been a frightening time to live through.
I think you could say the same about the interwar period on all counts: extraordinary technological and scientific progress, well above average artistic creation (with the same caveat about personal taste biases), and of course the political and economic horrors go without saying.
My questions for the room are:
1. What’s the probability that we are living in a similar time today, i.e. a time that 100 years from now historians (conditioned on historians still existing) will view similarly along these axes to the Long Depression and Great Depression eras?
2. If we are living in a similar time, what should one do about it? More specifically:
(a) selfishly, what lessons should we learn from those times about how to insulate oneself from political and economic problems?
(b) less selfishly, what lessons should we learn about how to capitalize on the unusually great opportunity to be part of technological and artistic flowerings that will greatly benefit future generations?
I think your period is too short. The late 1800s are, to my mind, a solid improvement over the mid-1800s, which had civil war, regular war, and massive social and economic unrest across all of Western society – if the worst we have to deal with in 1885 is a few anarchist bomb-throwers and some intellectuals talking with each other about this new guy Marx, I don’t know that 1880s Me would agree it was all that scary a time to be alive.
Is the feeling on this board that high? Either a majority of American posters here agree with that statement or we are wildly unusual. Both are plausible, so I’m curious which it is.
From my perspective, I view a second American Civil War as between very and extremely unlikely. The current political situation doesn’t have a natural center for a non-Federal power to emerge like Richmond was in the 1860s, and neither side is anywhere near actually taking up arms. Both still have faith that the political process will come to some sort of accord.
Edit:
@drunkfish has some concerns about the reporting on this polling. Please read his comment before responding. Thanks for digging in.
I say we are wildly unusual; I see it as having a ~0% chance of occurring in the next eight years. I think popular concern about it comes down to three things:
1. People watching the news and not understanding journalism’s incentive structure.
2. People who know it’s over-hyped but who due to social desirability bias want to appear “concerned” in order to distinguish them from the idiots who don’t pay attention to politics at all.
Apparently the judge overturned the jury’s decision. But the initial decision was actually horrifying and should shock and scare everybody. I don’t know at what age the kid would have started taking blockers, but this shouldn’t be a reason to accept this nonsense.
Or was this your point? People get enraged at things at the very first news stories and resist listening to corrections that their enemies are not pure evil? CS Lewis had some words about that.
There is a correction of sorts in the middle of the post, “(Initially I thought the medical intervention would occur now, at age 7 — sorry about the error in my initial post, but is age 11 really any better?)” ideally it would be in red letters at the top.
“Or was this your point? People get enraged at things at the very first news stories and resist listening to corrections that their enemies are not pure evil? CS Lewis had some words about that.”
No, the “correction” doesn’t really make the story any better. This is a common tactic when you don’t want to directly defend something: find some detail about the original claim that is incorrect and then declare it “fake” or “clickbait.” The gist of the story was entirely accurate.
I don’t like childhood transition. (I suspect that in a decade we will have then-adults who desisted and say they were reacting to their parents’ explicit or implicit choices, choices that are being made now.) But this case is at least 3 years from anything happening.
We give parents wide latitude to raise their kids, including making bad choices.
I think that headline might be *wildly inaccurate* (to the point that if you see this in time I think you should probably edit your post).
The first two lines are:
Partisan political division and the resulting incivility has reached a low in America, with 67% believing that the nation is nearing civil war, according to a new national survey.
“The majority of Americans believe that we are two-thirds of the way to being on the edge of civil war. That to me is a very pessimistic place,”
On a scale of 0 to 100, where “0” is there is no political division in the country and where “100” is political division on the edge of a civil war, where would you rank the level of political division in the country?
and gets the average result “67”.
That headline is so misleading I think it discredits the entire site that posts it. “Most people think we have a lot of division” and “most people think we’re ON THE VERGE OF A CIVIL WAR” are incredibly different statements, that don’t belong in the same breath.
The polling firm itself has a reasonable reputation, so the data is presumably decent until proven otherwise. But chalk this up as another indictment of uncritically repeating headlines.
Yeah honestly I only dug into it because the number just didn’t parse at all.
Thanks for the edit! I suggested that since I figured you weren’t intentionally sharing nonsense. Now I’m kinda curious, though: if asked to speculate on why Americans put that number so high, what would people in this thread have come up with to justify it…
I have a really hard time deciding what my own answer to that question would be, to be honest. If I parse it as “the probability of a civil war in the near future”, it’s low single digits (for the next 10 years I’d put it below 1%). If I parse it as “the level of disagreement between you and [caricatured outgroup member] compared to the level of disagreement between two sides in a civil war” then… maybe I do put it above 50? I just don’t understand a quantitative scale running from “agreement” to “killing”, as if those share an axis.
If you have two axes, “level of disagreement” and “willingness to use violence”, then I think there’s a compelling argument that on the disagreement axis we are pretty far along, just by virtue of people on both sides often being entirely unwilling to even engage with the other side.
I think the way the question is asked basically forces its result, because it’s just so incoherent.
I don’t read that as being 67% of the way towards a civil war. The way I interpret the question is that a 0 would mean everyone in the country has the exact same political beliefs, a 50 means there are multiple beliefs, but people mostly work together and are generally willing to compromise, and a 100 is all-out civil war.
In this reading of the question, a score of 67 puts you 34% of the way to a civil war, which I think is still too high, but it’s not as unreasonable.
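For what it’s worth, the two readings of the scale can be written out explicitly. This is a toy sketch; the function names and the midpoint interpretation are my own framing of the reading described above, not anything from the poll itself:

```python
def literal_fraction(score):
    """Literal reading: 0 = no division, 100 = civil war,
    so the score itself is the fraction of the way there."""
    return score / 100

def midpoint_fraction(score):
    """Alternative reading: 50 already means ordinary pluralism
    (multiple beliefs, people mostly willing to compromise), so only
    the upper half of the scale measures distance toward civil war."""
    return max(0.0, (score - 50) / 50)

print(literal_fraction(67))   # 0.67 -> "two-thirds of the way to civil war"
print(midpoint_fraction(67))  # 0.34 -> only a third of the way
```

The headline implicitly used the literal reading; the midpoint reading turns the same average response of 67 into a much less alarming number.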
While we’re on the topic, just wanted to say I was starting to worry about future civil war a couple months ago (due to the podcast It Could Happen Here), but Scott’s book review of Secular Cycles made me think again. We may be at a divisive time, but in a broadly well-off economy I don’t see very many people wanting blood.
Sort of related: about a year ago Harper’s Magazine had a piece on a “progressive states’-rights strategy” for the Tenth Amendment, titled “Rebirth of a Nation: Can states’ rights save us from a second civil war?” by Jonathan Taplin. Much of it is standard anti-Trump/anti-conservative rhetoric, but I was reminded of similar “progressive states’ rights” essays during the Bush administration. It seems that whenever the other party has the whip hand of the Federal government, the opposition rediscovers the Tenth Amendment (similar to how the filibuster is a good thing when one’s party is in the minority of the Senate).
On its face the gist seems plausible, as more self-governing states would seem to lead to less need to battle over the national government, but there was far less of a Federal government in 1860, just before an actual civil war.
As far as the chances of an actual shooting war, I’m pretty doubtful: who would fight it?
If it isn’t fought from keyboards I just can’t imagine it happening; there just aren’t enough warriors for a war.
Speaking only for myself, I think a civil war in the US is vanishingly unlikely if we define a civil war as being between two well-supplied military forces. I could be missing something, but I don’t see such a neat division inside the country, and I suppose I’m too used to living in the last superpower to imagine other countries giving military aid to one or both sides.
In the modern era one can think of a Core, Fringe, and Middle in the following sense:
Core: Wall Street, D.C., Hollywood, politics in developed countries, upper-level military in developed countries, cutting edge companies, Harvard.
Middle: Suburbs, Middle America, the working and middle class in the developed countries, everyday occupations.
Fringe: Developing countries, military expeditions to developing countries, survivalism and rustic living, black and grey markets, impoverished areas of developed countries, political ideologies and cultural traditions not popular with developed country elites, autocratic developing country politics, Antarctica.
In science fiction visions of the future, you find a similar Core/Fringe distinction:
Core: Ecumenopolis, interactions with highly intelligent AIs, high-level government and politics on the most developed and powerful planets, galaxy-spanning corporations, Earth as the Galaxy’s capital.
Fringe: Newly terraformed planets, underdeveloped planets, aliens resistant to the technology and culture of more powerful species, autocratic alien species, interactions with robots as menial laborers, expeditions to poorly-known parts of the Galaxy, war on distant planets.
Science fiction seems to have a bias towards the Fringe in its storytelling. Often the work starts with the character in the Core being “bored” with the Core and seeking out adventure in the Fringe, where the remainder of the story takes place. Or it starts with a character in the Fringe and features a chapter or two in the Core, where the Fringe-originating character feels uncomfortable and is happy to escape.
Imagine that many facets of industrial society were correctly predicted by pre-industrial writers, but that they focused mostly on the Fringe. A book would start in suburban America with the protagonist being bored and seeking adventure in the Fringe, it not occurring to him to seek adventure in Hollywood or on Wall Street. Corporations would often be seen, but usually as an external force oppressing the protagonist; rarely would the protagonist be inside the corporation itself. Migration from the Core to the Fringe would be more common than the other way around. Portrayal of the hellish (to a pre-modern observer) density of modern Manhattan would occur, but rarely as a permanent setting, with most of the action taking place relatively close to nature. It would make sense to portray the Fringe more often than the Core, because pre-modern readers would be able to better identify with the Fringe than the Core. But this very thing makes the Fringe less interesting than the Core.
What are some works of science fiction that portray a far future taking place mainly in the Core? The Age of Em (not a work of science fiction, obviously) does a good job of staying firmly in the Core, speaking mostly to the experience of ems (who will dominate the world) and not to the experience of humans (who will be on the Fringe of the world), yet this very fact was the subject of complaints that it should have focused more on humans and less on ems, whom readers were less able to identify with.
What are some works of science fiction that portray a far future taking place mainly in the Core?
Asimov’s “The Caves of Steel” comes immediately to mind; its sequels concentrate on the fringe. I believe the first _Foundation_ novel was also pure Core.
Not sure whether it’s really most of the movie, but The Phantom Menace should be mentioned for the prominence of Coruscant. Like with The Nybbler’s example of Trantor, it’s an ecumenopolis.
ETA: the distinction is not at all geographical, but Ada Palmer’s Terra Ignota series is very much Core.
I was helping my cousin with his math homework the other night and quickly realized that he had no idea how to do any of it. This kid is in special education, but his teachers don’t bother having him demonstrate proficiency in basic problems before moving on to the next step. So I spent the next thirty minutes ignoring the homework, trying to reduce the problem down to its most basic form, hoping that if he understood that, he might be able to figure out the rest. But it didn’t work, and I ended up coming up with a rote technique that he could use instead. It worked for some of the problems, but as soon as he got to something even slightly different, he was back to square one. After spending about an hour solving four problems, we gave up. I told his parents about it and they emailed his teachers, saying the homework was too difficult.
I realized that this exemplifies math education as a whole. I was always annoyed that they would teach us some technique, use it for a few examples, and then move on to the next thing, even if we had no understanding of what we were doing. But now I understand. They’re teaching to the lowest common denominator and it’s the only way to move on. Of course, those on the lower end aren’t really going to understand it but it doesn’t really matter. They are going to be confused in school, but once they graduate, they’re done with advanced math. My cousin isn’t going to college. He knows how pointless it is for him to be taught about algebra and geometry. Those on the higher end probably aren’t going to be hurt that much by this either. They have a better mathematical intuition and can figure it out for themselves. It’s those in the middle that are probably the most hurt. If someone tried to get them to understand the principles, then the techniques would be easier to pick up and retain. How many people are bad at math because no one bothered to explain what they were doing?
Math has the property that if you don’t actually learn lesson N, you will often be able to struggle and get by via memorization and plug-and-crank for lessons N+1 through N+K, but then somewhere later you will be really screwed by your lack of understanding of lesson N.
There’s a stark conflict here between the platonic ideal of making sure the student fully understands each concept before moving on to the next, and “you have 180 days, 24 students, and need to get through these topics by the end of the year.”
Right, but surely we can strike a better balance than what we do now. What’s the point of going through those topics by the end of the year if the students don’t get it? It’s even worse if the student needs only a little bit more time to understand N, which would make it easier and faster for them to learn N+K. If they’re constantly backtracking, then not only are we farther from the platonic ideal, it’s just vastly inefficient.
That’s the future teacher’s problem, though. And, because school dates are set well before the teacher gets the student, they don’t have the flexibility to take “a little bit more time” to teach a particular student. Summer vacation is going to start at the same time, and the teacher’s going to be held to whether they taught all of the things they were supposed to teach.
A few iterations of this over a couple of years, and you start seeing 5th grade teachers who can no longer paper over the deficiencies baked in by prior teachers passing somebody along because they just barely understood.
I’ve wondered if we couldn’t improve outcomes by having teachers follow a cohort of students through a school, rather than having each teacher teach a particular grade. That is, in elementary school, a teacher would start with students in 1st grade, then when they move to second grade, the teacher would remain with them, all the way to 6th grade when they move to Junior High. Then, the teacher would pick up a new cohort of students in 1st grade.
It would reduce the incentive to just pass somebody if they only barely understand something (or don’t understand it, but can just be made to pass the test), because that deficiency doesn’t become somebody else’s problem. It’s just pissing into the wind from the teacher’s perspective to handwave missing fundamentals early on, because that’s going to come back to bite them in a couple years.
That is probably a nice system, but also probably relies on more knowledgeable/skilled teachers than we can count on having (because they now have to be able to teach 6 years worth of curriculum rather than just the same year over and over), as well as less turnover. I’m not sure you can count on the same person still being a teacher for 6 years, much less still being a teacher in the same place.
Only through Junior high, which would be somewhere in the 11-13 range rather than 16. Although that only helps a little.
The continuity of the relationship might also be helpful (although I could see where it would also cause problems…if a teacher decides they don’t like you and you’re stuck with that teacher for the next 5 years that’s a tough break).
> in elementary school, a teacher would start with students in 1st grade, then when they move to second grade, the teacher would remain with them, all the way to 6th grade when they move to Junior High. Then, the teacher would pick up a new cohort of students in 1st grade.
Is done in Waldorf schools.
The huge disadvantage is that the kid is stuck with a teacher in an extremely powerful position for (IIRC) 8 years.
No hope for a fresh start with a better teacher — who explains better, doesn’t hold subconscious prejudices, etc.
And that’s where the “24 students” becomes a problem. Because student A can learn topic X completely in three days and will be bored out of her skull if you spend longer than a week on it, student B needs a week to mostly get it and another week to really understand it, and student C will never do better than guessing the teacher’s password no matter how long you spend on the topic. Oh, and there’s also students D through W.
Even if you did only have to worry about one student, having a teacher follow them through the grades would only push the problem up a level. Now instead of having 180 days to teach the standard first grade curriculum, you have 900 days to teach the standard elementary school curriculum. (Or 12 years to teach the entire curriculum.)
More tracking/leveling earlier might be a partial solution to these issues, but it raises problems of its own. Even setting aside standardized testing, you wouldn’t want to doom a first grader who has trouble with subtraction to permanently be on the “slow track” for math, never able to catch up to his peers. And if you want switching between tracks to be possible, then each track has to cover the same general core curriculum anyways.
From a god’s-eye view, I think the obvious solution at the high school level is to encourage greater specialization: students who don’t want to pursue a STEM career don’t need calculus; students who don’t want to be English professors don’t need literature analysis. Elementary education is a harder problem.
And if you want switching between tracks to be possible, then each track has to cover the same general core curriculum anyways.
Yes, but that is why the “slow” track is called slow. It’s more or less the same material, but taught at a slower pace, with more repetition and practice, and less abstraction/more object level.
Then moving up a track merely means slower progress in total than if you had been on the fast track all along, assuming that the student is able to adapt to the faster pace and higher abstraction.
For example, top level is A1-A5. Lower level is B1-B5.
Then a student that follows the fast track entirely would spend 5 years at this stage of education: A1-A5. A switcher could spend 6 years for the same material: B1, B2, B3, A3, A4, A5.
And that’s where the “24 students” becomes a problem. (…) More tracking/leveling earlier might be a partial solution to these issues, but it raises problems of its own.
The main problem with having too many students in a classroom is disruption. If, for example, 1 in 20 students on average is the type that will keep yelling stupid things at their classmates during lessons (or start loudly playing random YouTube videos during the computer science lessons), then statistically classes with 30 students are more likely to be disrupted than classes with 25 students, 20 students, or 15 students. The situation is more about the greater likelihood of getting an extreme disruptor than about the increasing complexity of teaching greater numbers of non-disruptors. (Like, the class with 30 students can actually become quite okay for a few days when that one kid suddenly gets sick and stays at home. The mere difference between 30 and 29 students in the classroom is not enough to explain the magnitude of the change.)
The second greatest problem is having kids with different abilities and interests. Now this could in theory also happen in a class of 5 students, if you’d get one Math Olympiad winner, one quite good learner but not deep thinker, one bad learner who is mentally average but considers learning boring and had bad teachers at previous grades, one literally almost retarded kid (but only almost; that’s why the kid is in your class), and then one weird kid with some combination of autism and schizophrenia who also happens not to speak English as his first language. Now go ahead, and prepare a lesson all of them could enjoy together.
(Again, a greater size of classroom makes it statistically more likely that something like this will happen.)
Yes, tracking/leveling raises all kinds of problems, but ignoring different abilities and attitudes solves nothing. No child is left behind, but also no child gets too far ahead of the last one, unless they also take private tutoring, in which case they mostly waste their time at your lessons.
So we should rather think in the direction of how to fix differential education, so that e.g. being slower temporarily does not create compounding permanent effects. Maybe make a difference between “slower learning (of the same content)” and “dumbed-down learning”? Perhaps we should make everyone learning at their own pace a norm, not an exception; even if that creates some logistical problems for the school. Like, instead of “being in the 4th grade” you could simply attend “level 5 math” and “level 4 chemistry” and “level 3 history”; and your classmate would have it the other way round.
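The class-size point above is easy to quantify. As a back-of-the-envelope sketch (assuming, purely for illustration, that chronic disruptors occur independently at the 1-in-20 rate mentioned earlier):

```python
def p_at_least_one_disruptor(class_size, p_disruptor=1 / 20):
    # Chance that a class contains at least one chronic disruptor,
    # i.e. the complement of "every seat holds a non-disruptor".
    return 1 - (1 - p_disruptor) ** class_size

for n in (15, 20, 25, 30):
    print(n, round(p_at_least_one_disruptor(n), 2))
# 15 -> 0.54, 20 -> 0.64, 25 -> 0.72, 30 -> 0.79
```

The jump from roughly 54% to 79% between a class of 15 and a class of 30 matches the intuition: bigger classes are much more likely to contain at least one extreme disruptor, even though adding any single student changes little.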
The main problem with having too many students in a classroom is disruption. If, for example, 1 in 20 students on average is the type that will keep yelling stupid things
One thing that occurred to me, is that this isn’t the way it works. Some classes are very, very quiet, others are very loud, and my hunch is that this isn’t consistent with a random distribution of troublesomeness among students. Rather, disruption is to a large extent a function of the dynamic between students.
Schools could combat disruption by rearranging the classes, ideally with some understanding of the psychology involved, but at worst, just randomly breaking up disruptive milieus would probably help.
@Viliam @Ketil
Throughout my K-12 experience, distracting students were never really a problem, though stories I’ve heard suggest that they most certainly are in some classrooms. I suspect the probability and level of distraction are a function of a number of things, but most importantly of whether the teacher can credibly threaten real punishments for any and every student who disrupts the class. If detentions aren’t enforceable, students with a tendency to misbehave will misbehave. If half the class doesn’t give a damn and misbehaves, the teacher is screwed if they can’t send half the class to the principal’s office. If getting sent to the principal has no real consequences, it ceases to be an effective threat.
The teacher’s job is also a lot easier if the vast majority of the students are predisposed not to be disruptive.
That kind of system sounds a lot like what I’d design as education czar. Abolish the “grade” system; have slow, standard, and fast tracks for each subjects. Switching from a slow track to a fast track probably entails going back slightly in the sequence to repeat some material because of desynchronization, but going at the faster pace from there on. Then possibly branch out into electives at the former “high school” level with prerequisites of a certain progression through the track.
Additionally, abolish the idea that every high school graduate ought to know calculus (as I suggested above). If you decide to stay on the slow track for a subject, you’re not going to get as far by the time you graduate, but you probably won’t need an in-depth understanding of that subject anyways.
Needless to say, this would not be possible to implement in the current (US) education system without sweeping reforms.
Additionally, abolish the idea that every high school graduate ought to know calculus (as I suggested above).
In 2018, only 19.3% of students graduating from high school in the US had taken calculus, and in fact, only 48% of high schools even offer calculus. Source: National Science Foundation. The idea that every high school student should take calculus is already well-and-truly abolished.
@littskad
Huh, guess I should be more careful about generalizing from my personal high school experience (public school in a relatively rich town where pretty much everyone was expected to go to college).
When I was in (a Quaker) elementary school, all our math was taught through Individually Prescribed Instruction (IPI). Each student took a placement test at the beginning of the year to assess their knowledge. Based on the results, each individual would work through new concepts, and once we proved mastery, we’d move on to the next concept. This allowed us all to work at our own speed and level. Our work was all hand-graded. I imagine that today, with computers, this would work even better.
If I remember correctly, she would provide personal assistance, if a student had repeated difficulties in mastering the concepts. But I think it rarely happened. Mostly she was scoring how we did on the various skills and tests.
Hahahaha, man, I can’t even get college graduates to understand and approach things conceptually. This is not an easy mindset and most people cannot do it. Most people just need to get the rote mechanic and you’re lucky if they can perform simple problem-solving within their simple domains.
I actually just had a conversation about this over lunch with a coworker. Standard griping. Even at manager level, some managers struggle with this concept. I don’t think it becomes reliable until you start getting at senior manager level or above.
I think many schools, at all levels, are not teaching problem solving and critical thinking skills nearly as much as they used to, and may even be doing things to actively discourage these skills.
Among my friends/peers, the majority of college graduates don’t have the ability or presence of mind to think anything through critically. Those who have gone only as far as a high school education do much better in this regard. I’m not sure exactly why this is, but it may have something to do with being faced with certain realities of life sooner than college-bound students are, since schools are no longer emphasizing these skills.
Additionally, I have aunts and uncles who have been college professors for several decades and they have also noticed and discussed with me about their observation that many of today’s students lack many of the critical thinking skills that used to be much more common among college-level students.
Among my friends/peers, the majority of college graduates don’t have the ability or presence of mind to think anything through critically.
I feel like you might be overstating this. These people literally can’t think critically about things, or they just don’t think critically about the things you think they should be thinking critically about?
I am capable of thinking critically, but there are tons of things I don’t think critically about. Some of it is laziness, some of it is because I don’t feel the given thing is worth the effort, some of it is because I don’t have time to critically evaluate everything everywhere all the time. Some of it is probably unintentional, or at least partially unintentional, avoidance of critically evaluating an area where I have a particular bias that would be challenged. Probably plenty of other reasons.
I never said they “literally can’t think critically about things.” I said that they either don’t have the ability or the presence of mind to think things through critically.
I don’t expect anyone to think about everything critically. I’m mostly referring to things that are important to them or things that they are required or need to do, whether that be in their home, work, or school lives. I’m talking about situations where they have a problem and solving it is their goal, but if the first solution they try does not work they never even consider a different method, much less try one.
I can’t get people to think critically, understand high-level goals, or comprehend processes in their own job functions. This isn’t me asking “so what do you think about the Many Worlds Theorem,” this is me asking “What would you say you DO here?” and them not understanding how they fit into a team, or how the overall process works.
This is typical for any staff position, not unusual for middle management, and usually not at all present in senior management. This, IMO, is one of the big divides between the Big Wigs and the peons.
I know a guy who works in a consultancy, and he frequently encounters entire teams of people who are unable to do a simple task like: “Describe, in two sentences, your role in the company”. People are unable to distinguish between the tasks they do, and the role they play.
“What would you say you DO here?” and them not understanding how they fit into a team, or how the overall process works.
What you focus on as the question asker may very well diverge from what’s salient to the answerer.
An awareness/focus on “teams” in particular (and social context in general, which often would include a focus on the inter-relatedness of the components of a process) are a facet of personality, not intelligence or analyticalness. For a more in-depth take on this, I recommend reading about the “products” of Guilford’s “Structure of Intellect”.
Be glad that you actually have a diverse employee body, because I can guarantee there are things that you are missing that they are picking up on.
(Aside: Given the political nature of promotions, it’s not surprising to me that the vast majority of exec-level people would be socially-contextually aware. This may actually be a warning sign, though, that there’s too little intellectual diversity in the higher-ranks, not a signal of the appropriateness of their ranking.)
(Additionally, depending on how low-level a person is, they may be effectively socially excluded from knowing the larger context of their work.)
I really don’t think there’s a trade-off. This isn’t social knowledge. Staff level who don’t understand how they fit in on a team don’t have a problem with social skills, and most have no problems understanding social nuance or managing relationships (otherwise they would have lost their jobs). They just don’t have the ability to understand and improve complex systems.
Our middle managers all started out at staff level and succeeded there before moving into management roles. Part of the reason they were promoted was because they can fix systems and understand how they fit in on a team. However, working in a factory is pretty cross-functional, so this isn’t analogous to all organizations. I’ve been in other companies that were heavily siloed and nepotistic, and middle managers did not always have this skill set.
That’s not to say that these people are not useful. They undoubtedly have specialized knowledge that other people do not have. However, they do not really grok “why am I doing this?” which means they cannot answer “how can I improve this.”
Part of the reason they were promoted was because they can fix systems and understand how they fit in on a team.
I would need pertinent examples of what you mean by “systems” and “understand how they fit in on a team”. Both of these imply the need for contextual awareness. As an example of what I’m getting at: My personal contextual awareness is not that good, but my implicative, cause-and-effect awareness is pretty darn great (at least on the job).
Obviously, not everyone is capable of the same level of contribution (even at the lower levels, some people are far more capable at various sorts of work that would be considered impossibly tedious by most higher-ups). However, I can’t tell whether you’re getting at that kind of division, or whether you’re genuinely focused on a matter of personality salience.
It’s possible your workplace really doesn’t need the other kinds of salience, so they aren’t recognized. Part of living in a society of non-integrated businesses allows each business to totally outsource various important things to its suppliers or customers.
I can give you some pertinent examples in a similar context to ADBG’s experience, though I can’t speak to how this shows up outside of manufacturing-oriented organizations. I work in the engineering group for a large aerospace development program as a structures analyst for our avionics (aerospace-ese for electronics) group. My role and my day-to-day tasks, while related, are very much not the same thing. The headline objective of my role is to provide an engineering assessment that the avionics system will meet X life cycle with Y reliability.
What that looks like from a day-to-day perspective is some combination of a) gathering information on the proposed design (e.g. geometry, methods and materials of construction, expected environments, service requirements, etc.), b) performing finite element & classical analysis modelling, and c) looping information from the results back to inform the design (e.g. starting the whole thing over again; happens roughly every 3 months). However, analysis of modern avionics systems is a tricky business: modern electronic component design has advanced to the point where we start talking about having tens of thousands of individual components on a given electronic board, many of which you only have sketchy construction details for, gleaned from a manufacturer’s data sheet that’s properly aimed at giving the sparkies (electrical engineers) what they need to design the circuit. On top of that, the dynamos (dynamic loads) group is only guessing at the right random vibration spectrum to apply to your box (and it changes every few months as the box location changes, the structure it’s mounted to is redesigned, and the propulsion system turns out to have different characteristics than were guessed at the start of the program), and try not to think too hard about the shock environments, because there really isn’t a great way to model their effects anyways. At the end of the sausage making, I produce a stress report, which covers the expected construction of the design, the environments, and the expected performance of the unit.
However, after spinning up all of that work, nobody really trusts the analysis anyway, so the box gets sent to the environmental test group for shake & bake testing, where they’ll expose it to environments enveloping those predicted for its useful life (usually with some fudge factors to cover for underpredictions on the loads and variability in the construction), and if it passes then the box is good to go and we’ll fly it. So if the only real arbiter of whether the design was mechanically sound enough to fly is the test result, why pay me $$$ to sit around and hold up the design process with constant requests for more time to do analysis? At the simplest level, because the company process manual says that all flight hardware must meet the requirements of the Structural Assessment Plan (SAP), which in turn requires that a structural assessment be performed on all flight hardware (stepping up a level, it’s a company requirement because there’s a requirement in both the government RFP that we are responding to and from several of the regulatory agencies that have to bless us before we fly). Go one level deeper and it’s a cost/benefit trade for our organization: if my analysis can provide some insight into the mechanical performance of the box, we might be able to pass the shake & bake testing on the fifth attempt, instead of the 43rd attempt like one of the boxes designed when the organization was still in startup mode, rather than the mature aerospace prime approach it’s trying to take on the current (order of magnitude larger) project that I was hired on for (i.e. trade test dollars vs. engineering dollars).
That’s still a pretty simplistic view of what my role is, though. It’s almost tautological, but when you do an engineering test, you only get the results that you tested for. That means you can demonstrate one (or practically maybe a dozen) test objectives pretty well, but you don’t really have much insight into what happens when you start to deviate from the test conditions (say a supplier of a certain specialized electronic kit decides to double their prices after shifts in the rare earth metals market make their previous price uneconomic, and you want to switch to a different design). By anchoring analysis models to the test results and analyzing the effects of a change (say you had to change a few electrical components and move a mount point a quarter inch), I can provide useful information about the magnitude of the impact of the change, which allows us to make a basis-of-similarity argument and avoid sending the design back into testing. Note that this is a bit of a dangerous game, and is where a lot of the engineering failures that make the news happen (I would at least in part attribute the engineering aspects of the MCAS problems that Boeing has been having to this part of the process). Wait a minute… why do engineering failures happen here? Why, when my role is “ensure X life with Y reliability”, is the process suddenly a lot more vulnerable at this step? This sort of thing generally comes up pretty late in the project, when meeting delivery schedule is king, so is it because the engineering is rushed to paper over the problem and get the vehicle in the air? While I’ve felt pressure to get the job done, aerospace has a pretty strong safety culture, so I do have the time to make sure I’ve done my work to a point that satisfies both myself and the review process. Why then, when a bunch of motivated, smart, experienced individuals spend a lot of time and effort, do we still get it spectacularly wrong sometimes?
Time for another tautology: complex systems are complex. To fight this, we generally take a “cheese cloth” approach to safety, which is basically the idea of having several independent layers of checks on whether something is good or not, so that even if one process check only catches x% of the errors, once you’ve gone through three or four gates you’ve caught 99.999…% (adjust as needed for the application) of the errors. Structural analysis, testing, manufacturing processes, inspections, and audits of all of the above form some of the gates, where we can aggregate the collective knowledge of a very large, intellectually diverse, and experienced group of people. In the scenario above, while the analysis step is completed correctly, we did so in a way that cut out the testing layer of cheese cloth completely, and generally some of the other layers as well (e.g. review of the findings for other avionics boxes on the vehicle for common/integrated failure modes; late-breaking changes mean that the tooling won’t be ready in time to build the first production vehicle, so it is built by non-standard processes; etc.). Therefore, my specific role is to catch the right x% of errors, balanced against the cost/time it takes to do so, and striking that balance is the key role of the program management team (so while I make snarky comments about how poor the mechanical design process was before they created my role at the company, it may have made sense for the project in question… though I’m convinced they just hadn’t caught up to where they needed to be as an organization quickly enough, and that was only true three or four projects back).
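The layered-gates arithmetic is easy to sketch: if each independent gate catches some fraction of the errors reaching it, the escape rate multiplies down with every layer. A minimal sketch (the catch rates here are made-up illustration, not real process numbers):

```python
def escape_rate(catch_rates):
    """Fraction of errors that slip through every gate,
    assuming the gates are independent."""
    rate = 1.0
    for c in catch_rates:
        rate *= (1 - c)  # each gate lets (1 - c) of the remainder through
    return rate

# Four gates that each catch only 90% of errors on their own
# still stop about 99.99% of errors overall:
print(escape_rate([0.9, 0.9, 0.9, 0.9]))  # ~1e-4
```

The independence assumption is the whole game, which is why cutting out a layer entirely (or letting two layers share a blind spot) hurts far more than the per-gate numbers suggest.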
That also means my role has a broader impact than it would appear from a look at the mechanism of take design information → analyze design → adjust design and repeat as needed, as I have a role in defining and auditing the effectiveness of the process steps that come before and after me, particularly for things that have subtle effects on structural performance. For example, when building a printed circuit board, the electrical team will define which layers of the board carry electrical signals or power and will route traces of copper to accomplish these purposes. However, depending on component placement, this can leave large sections of a given layer without very much copper required to meet the electrical function of the board, so the electrical team stops caring about what happens in these regions. When we get to manufacturing, though, this causes problems, because having large bare patches makes it difficult to control the assembly process, and so commercial fabrication houses will traditionally allow “thieving”, which is to say that, with X clearance to the customer-defined traces, they can add copper to fill in the gaps and make the board manufacturing more reliable (i.e. lower scrap rate). For most applications, this isn’t a big deal, but when you stick the board in very nasty mechanical environments (say, attached to an engine), the changes in board mass/stiffness can be significant.
Now this isn’t a problem for the early design, because your first board is manufactured by the same fabrication house that is going to build your immediate demand (and they have their own internally consistent guidelines for how much thieving to add, so your boards always come out pretty close, and while your analysis model never even considered this, you correlated it to the test results and made your predictions). But what happens three years down the line, when you switch suppliers and the amount of thieving they need to meet their scrap rate is different (because their manufacturing process is different, and it’s not something they provide a lot of insight back to their customers about)? This may have been traditionally handled in aerospace by a blanket ban on allowing any thieving at all, but that only survived as a half-remembered requirement from some veterans who were hired from other organizations, initially carried over because that’s-the-way-it’s-done and we don’t have time to be worrying about it. Then a few years and programs down the line, someone looks at the requirements and asks the appropriate question: why are we doing this weird thing that all of our suppliers are telling us makes their lives painful, slow, and expensive (which they are billing back to us for), and that none (or practically none) of their other customers require (aerospace is way less than 1% of the demand for electronic hardware)? Without a structures person (or at least someone with a structures background) as part of the review team for that decision, it’s unlikely that anyone will have realized that it serves an important role in keeping the link between our “cheese cloth” layers of design, analysis, and testing anchored to the manufacturing and inspection processes.
Multiply this by the thousands of decisions large and small that get made in a typical day on a development program, and an important part of the role of every member of the team becomes attending meetings and briefings and socializing the changes they are making, because even though 99% of the time you think you’ve caught the implications at your level, you don’t know how they will spread. This then has to be balanced against spending so much time in meetings that you never get any of your own work done.
The long and short of it is that none of this stuff is obvious to the young college-graduate engineer (and I even had the benefit of some special-studies classes my program was trying out to address this gap), and it’s not something that’s born purely from experience either. While you can get really good at your own part of the picture that way, you don’t have any need to understand the cheese cloth to do your day-to-day job perfectly (e.g. I could do a full and complete analysis of every detail of the box design, make sure it had all the right requirements on the engineering documentation, and send it on its way), and yet never figure out how to improve the overall effectiveness of the organization. We really could have made change X if we’d mitigated it with tradeoff Y, and the benefits of X were greater than the costs of Y; even if I don’t know anything about X, I can provide the relevant information to the decision makers on Y, even though that’s not something I need to do in my day-to-day job.
And as a further note, while my experience is within the engineering team at the prime level (so the group where this level of salience is probably most useful), it has practical implications throughout the business process, out to the tertiary and beyond sub-contractors. Sitting at the top of the food chain, we constantly get fed stories from the factory floor, from installers, inspectors, auditors, cleaning staff (famously, since changes in dirtiness are pretty good indicators of knock-on effects), and beyond, of “something just feels off”. Even if you aren’t aware of exactly how the domino chain fans out from your role, having enough awareness of what your role is to notice that things are different (both for the better and the worse) allows you to react to those changes more effectively, raising red flags and adjusting priors as information comes back that things really are supposed to be different this time. Building things is very much a team sport, and understanding what your role is, beyond merely what your task is, sits at the core of teamwork.
as part of the review team for that decision, it’s unlikely that anyone will have realized that it serves an important role
This is a good reason to mandate getting feedback from all the downstream stakeholders in a decision.
I presume that, if any such feedback is mandatorily sought, it is generally sought at the manager level, and the workers are, at best, just asked what impact the decision will have on their job, without being made aware of where their job sits in the process.
Those sorts of expectations are likely greatly responsible for the “social exclusion” I mentioned earlier.
spending so much time in meetings you never get any of your own work done.
Yeah, when few enough people are employed it’s impossible to find time for cross-training, despite everyone believing that cross-department training is important. This is an issue whenever organizations seek to maximize profit-per-worker by minimizing number of workers.
So it’s no surprise when the people who are aware of the context of their job in the entire process are those who are more politically or socially inclined. They do it in their free-time at gatherings, or somehow convince the organization to pay for social events that allow them to learn in a manner most congenial to their natures.
Why are the organizations willing to pay for social events and social groups and not for cross-training? Perhaps because the higher-ups controlling the purse-strings are socially inclined themselves, and do a gut-based expense-justification check instead of a true value-added analysis? Perhaps because it’s easier to justify an expense if a lot of people gather together and ask for it than if a lot of people individually ask for it?
and understanding what your role is beyond merely what your task is at the core of teamwork
Sure, but even if someone does understand this, it doesn’t mean that they would effectively communicate their understanding of it if it’s not something that’s salient to their personality.
For some folks, I think getting the rote mechanics down is a fine start; then you can build a conceptual understanding on top of what they already know. Like, you can show some folks the magic black box known as the quadratic formula, and when they’ve used it a hundred times and know it inside and out, show them how to derive it by completing the square. Other folks might be better served starting with completing the square and then being handed the formula afterward.
I was really well served by the former in my math classes, which makes me a little skeptical of moves toward purely conceptual approaches. Like I totally want a conceptual understanding eventually, believe me, but when I get that first I tend to have a hard time translating it to the math, and soon it’s lost and was a waste.
“Approaching things conceptually” is too strong for what I mean. I don’t expect high schoolers to be philosophers of math, understanding the axioms thoroughly. Those concepts usually fly over my head. I mean just a very basic understanding of what something is before moving on.
Here’s an example where I think the schools do it right. Before children learn multiplication, they have to learn addition. Then, when they learn about “2 times 4”, they have to add 2 together 4 times. You can see them count it out with their fingers. I don’t think eight-year-olds really grok why, but on a basic level, they have it right. Then after they do that, the teacher will make them memorize the times tables, because it’s faster than trying to figure it out every time. (This is how I learned, anyway.) At some point in their education, math becomes this thing where kids are taught to memorize “6 times 7 equals 42” even though basic addition was glossed over in one lesson. They forget what “8 times 9” is, and don’t know how to figure it out.
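The “add 2 together 4 times” stage is literally just this (a toy sketch of the kids’ method, obviously not how anyone computes for real):

```python
def times(a, b):
    """a times b, the way kids first meet it: repeated addition."""
    total = 0
    for _ in range(b):
        total += a  # counting it out on your fingers, b times
    return total

times(2, 4)  # 2 + 2 + 2 + 2 = 8
times(8, 9)  # recoverable even if you forgot the memorized fact: 72
```

The point of the example is that the memorized times table is just a cache over this procedure; a kid who forgets the cached fact can still fall back to the loop.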
One of my stepdaughters has the problem that she has been trained by her math teachers to guess the password on her math work. She learns the rote procedures that the teachers lay out for her but understands almost none of it. I try to help her understand what’s actually going on, but she is very resistant to getting out of the ‘we were given this procedure so this is what I have to do’ mindset.
I believe her learning process was broken by a really bad teacher of math she had in the 6th grade when I was just her mom’s boyfriend. This teacher would often assign homework that was impossible – not ‘too hard for 6th graders’ but ‘not enough information to solve the problem’. The same teacher (and I guess the whole school) assigned online homework where the problem would be presented, you would give an answer, and you would get immediate right/wrong feedback (either with the answer after every problem or right/wrong now and all the answers at the end). Often the ‘correct’ answers were wrong and there was no way to get them right.
One assignment reported that 1.5 was wrong – immediate feedback that she was doing it incorrectly but at the end of the assignment she ‘learned’ that the correct answer was 1.50. Nowhere on the assignment did it say how many decimals to include.
Another assignment flipped the form of the required answer apparently randomly. Problem 10 required a decimal to 2 places, problem 11 required an improper fraction, problem 12 required a mixed fraction, etc, problem 13 another mixed number, etc. No indication in any problem’s wording of how to present the answer. I found her in tears while working problem 8, having been told over and over she was ‘wrong’. I helped her with the rest, and of course things got worse in her mind as we kept getting more of them ‘wrong’, over and over. We missed 13 of 20, together, but once we got to the end of the assignment and saw the answers, we saw that she had legitimately only missed 2 of the first 8, and of the final 12 ‘we’ had only missed one more. I wrote an impolite email to her incompetent and lazy teacher about this homework that her mother wouldn’t let me send. The problem only existed because the teacher didn’t look at the online work at all before assigning it. It would have taken her 5 minutes to do it herself and see the problem before subjecting her students to it. It was 20 problems of 6th grade math and she’s a 6th grade math teacher! Anyways, the girl decided that year that she hated math and was no good at it. Too bad, she’s a smart kid, really.
My other stepdaughter never had that teacher and is somewhat more gifted at Math and is proud of how good she is at it. Unfortunately, this year she has a teacher as a freshman in HS who is apparently not good at math? The teacher gets hostile when she is asked a question and can’t really explain anything, apparently. At least she’s not doing damage like the other one did.
The really bad news is that neither of these two is the worst teacher my daughters have had! That ‘honor’ belongs to a 7th grade English teacher who accused the younger daughter of cheating (actually she accused her of plagiarism, and even gave the definition of plagiarism) on a five-paragraph essay. The actual accusation is that my daughter and another girl worked on the essay together (they both claim they were told they could do so by the teacher), NOT that they plagiarized another source – writing it together was apparently plagiarizing each other, in the teacher’s view. Anyway, this teacher was fired because she didn’t really teach; she just showed up and handed out worksheets and played on her phone, or didn’t show up at all. I think the primary reason they were able to fire her is that the other teachers had to sit in her abandoned class during their prep periods while she was off doing whatever. The other teachers wouldn’t really go to bat for her. But they didn’t fire her until the year was over, so it was pretty much wasted for my daughter.
TL/DR – the state of teaching is probably worse than you think
Every district/state is different, but I have one kid in special ed and one not, and the teaching methods are very different.
Common Core math tries to help this problem by teaching you a bunch of different (admittedly rote) ways to do each problem (e.g. multi-digit multiplication by adding areas, breaking down into 1000s/100s/…, etc.). This can help get at the underlying concepts.
It gets constant whining from parents because the kids aren’t using the same methods the parents were taught, so the parents don’t know how to help the kids.
Like I said to A Definite Beta Guy, it’s not the “non-conceptual thinking” part I really have a problem with. It’s the lack of foundations before moving on. I didn’t have a conceptual understanding of “22 times 28” as a twelve-year-old, but I knew how to work out the answer. And if you threw in extremely large numbers, negative numbers, and decimals, I didn’t go into a catatonic state.
sorry…”borrowing” isn’t the standard, simple way to do subtraction!?
In my elementary school curriculum, that was the method that seemed like the holy grail of “oh, this is how it’s actually done” compared to… whatever lattices or such nonsense they were teaching us in parallel. But… of course you can just add the carry to the *checks Wikipedia* subtrahend one place over, instead of crossing out and rewriting a digit of the minuend one smaller. That video may have just sped up my pencil-and-paper arithmetic, if I can get the hang of the old technique.
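Both methods are easy to write down digit by digit. Here’s a sketch of the “carry to the subtrahend” version (usually called equal additions) as I understand it: when a minuend digit is too small, add 10 to it and add 1 to the next subtrahend digit, instead of borrowing from the next minuend digit. Both shifts add 10 to the same place value, so the difference is unchanged:

```python
def subtract_equal_additions(m, s):
    """m - s by the 'equal additions' method: instead of borrowing from
    the next minuend digit, add 10 to the current minuend digit and
    carry 1 into the next *subtrahend* digit."""
    assert m >= s >= 0
    digits, carry = [], 0
    while m or s:
        md, m = m % 10, m // 10
        sd, s = s % 10 + carry, s // 10
        if md < sd:
            md += 10   # add 10 to the minuend digit...
            carry = 1  # ...and 1 to the next subtrahend digit
        else:
            carry = 0
        digits.append(md - sd)
    return sum(d * 10**i for i, d in enumerate(digits))

subtract_equal_additions(52, 17)  # 35, same answer as borrowing
```

One reason it can be faster on paper is that you never rewrite a minuend digit; the carry rides along in your head.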
That sounds somewhat like the system used in my kids’ elementary school 20 years ago. It was called Chicago Math by the teachers, and the process was that they would go over math skills very quickly and then move on to something else. Later they would come to the same subject again and cover it a second time. And this cycle would continue, with them covering every topic quickly but cycling back many times to cover it again. Maybe the teachers didn’t explain it well to me, but it never made sense to me, and neither of my kids learned math very well.
I looked up Chicago Math on the Internet and I get something else. The way education trends come and go, it could well have changed by now. But I wonder if the philosophy of your cousin’s math teacher is similar. It still doesn’t make sense to me, though.
When planning classes, it’s sometimes made sense to me to do things this way for certain topics, but I shudder at the thought of just doing this for every single topic in a class.
The idea with coming back around to something is that you begin by teaching the basics and how to handle simple cases, and then later on you come back and cover the thing in more detail. You can do this several times. I see three reasons why you might – when the reasons apply – do things in that order rather than just covering the topic once and for all:
1. Maybe once you teach the basics, you’ll keep using those basics as elements of every other thing you do. Say, you teach the multiplication table up to 5×5, then do a unit on word problems where you only deal with small multiplication tables, then you go back and teach a bigger multiplication table. Then all the time between coming back to the topic still helps reinforce it.
2. Maybe the topic involves a lot of memorization and if you try to make the students do it all at once, they’ll just get everything mixed up. I think language classes try to alternate vocabulary-heavy and grammar-heavy topics for this reason.
3. Maybe in between round 1 and round 2 of this topic, you learn a different tool that helps you deal with the advanced cases, but it didn’t make sense to teach that tool first. In other words, the topics in the class you’re teaching all depend on each other in subtle ways, and you can’t just cover one of them in a go.
Nice reply, Kindly. Your ideas make sense. It doesn’t sound like quite the same thing as I understood Chicago Math to be, though. You say you do the multiplication tables only up to 5×5, and then do word problems on just those. I think that makes a lot of sense. But my understanding is that the teachers would instead teach all the multiplication tables, but too fast for anyone to memorize, and then return to them later, thinking that the kids would get it if they kept coming back.
I like your method; I hope I was just misunderstanding the teachers. But again my kids didn’t learn math too well, so it didn’t seem to work. It is possible my expectations were too high. I have very good math skills and my kids are adopted, so not my genes. But I don’t think that’s it.
Looking at spending over the long term will show that the increase since 2010 is at a lower rate than the increase before, which is doubly impressive since many more people are now covered and they’re sicker on average than those who already were; see e.g. Wikipedia. This is talking specifically about patient-paid deductibles, which were increased, as far as I can tell, out of the utterly unsubstantiated belief that patient demand has anything significant to do with medically-unnecessary testing. This is the source of the obviously biased ‘Cadillac Plan’ and ‘moral hazard’ terms from the ACA debates – it’s also completely fixable in statute; write your congressperson.
The idea that paying for your own care should bring prices down absolutely has merit where small/individual buyers are the majority of the market. If the main buyers are still the large payers, then all this high-deductible stuff won’t move any needles. I don’t think “high-deductible plans didn’t work as currently designed” is a sufficient argument for “AHA! the market solution for healthcare was tried and FAILED! Medicare for All now!”
(I know that’s not what you’re saying; but a lot of people do….)
The idea that paying for your own care should bring the prices down absolutely has merit, where small/individual buyers are the majority of the market.
Only if you actually can shop around. I had a very minor procedure recently and put a decent amount of effort into the question of cost. I literally could not get either the facility, or the insurance company to tell me how much it would cost, either me, or the insurance company.
ETA: In advance, I mean, they were very happy to tell me how much it cost afterwards. To be fair, the nurse on duty was willing to give me a ballpark estimate, which was within a couple hundred dollars of accurate, but she was very clear it was just a guess.
Even without insurance, you pretty much always have to agree in advance to unlimited financial liability for whatever amount they decide to bill you. This is actually worse without insurance because when you’re in-network there’s a set rate (though you usually have no say in what codes are billed and the like).
I don’t know what to call that but “market” does not seem applicable.
Assuming that quantum computers keep improving, what areas of science and technology will have advanced thanks to it in 20 years? How exactly will things be different?
@Lambert
Why protein folding? Classical dynamics is extremely good for modeling protein folding. (That’s how molecular dynamics simulations work.) Like most other computational tasks, macromolecular dynamics seems like a great example of something that shouldn’t be helped by quantum computing.
I’ll take back my comment, since quantum annealing should be applicable to minimization problems like protein folding. I’m still skeptical that this will actually be relevant, however.
Correct me if I’m wrong, but only certain types of mathematical problems are more easily solved via quantum methods, right? AFAIK, only factoring large numbers and searching unstructured databases are known to be better accomplished via quantum algorithms than by conventional means.
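For scale: the two headline results are Shor’s algorithm for factoring (superpolynomial speedup) and Grover’s for unstructured search (quadratic speedup). A back-of-envelope sketch of what “quadratic” buys you, ignoring constants and error-correction overhead entirely:

```python
import math

def search_queries(n_items):
    """Rough query counts for unstructured search:
    classical ~N lookups vs Grover ~sqrt(N) iterations.
    Asymptotic scaling only; constants ignored."""
    return n_items, math.isqrt(n_items)

classical, quantum = search_queries(10**12)
# a trillion-item haystack: ~10^12 classical lookups
# vs on the order of 10^6 Grover iterations
```

That’s a big win, but note it’s polynomial, not exponential, which is part of why Grover alone doesn’t obviously revolutionize whole fields the way Shor threatens cryptography.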
I strongly suspect that when they’re actually available, we will see a lot more applications for quantum computers. Right now, cryptography is a place where we know two very generic algorithms that will be able to attack a bunch of stuff we care about in the future, and also where we have good reasons to worry about quantum computers that come into existence in the future: if the NSA records all the public-key-encrypted traffic today, they can crack it when they get their quantum computer running.
I have the suspicion that computational sciences that rely heavily on algorithms, such as genetic analysis, will also have the potential to advance drastically.
I think 20 years is too optimistic. 30 years seems more realistic. Not that I’m a researcher in the field or anything, but I do keep up with the papers, and I don’t think we’re within 20 years of a quantum computer that revolutionizes any of the sciences.
That said…
Perhaps this is under ‘quantum simulation’, but I think the biggest impact might be designing and simulating novel materials which abuse quantum mechanics for fun and profit. By random trial and error we’ve only scratched the surface of what materials are physics legal.
You have been allowed the budget and legal authority to create a self-governing zone called “Heaven”.
-What are its distinguishing features?
-What is the governmental form?
-What are your entry requirements (immigration policy)?
-Will you charge a fee?
-How do you deal with unrest and crime? (Personally, I’d go zero-tolerance: the accused spends time in “purgatory” while the case is heard, and is cast out if it results in conviction.)
-What does your budget pie look like (even if funds are unlimited, spending is not, and can be apportioned)?
“Heaven” is an artificial island like Dubai’s Palm Islands, populated solely by a fictitious religion of Heavenites. Their tithe goes toward a sovereign wealth fund, UBI for islanders and expansion of the island.
Distinguishing features: Life on the island is utopian – in that we have machines and cheap imported labour do all the hard or unpleasant work. Everyone has a ‘vocation’ that they spend time doing but it might be a sport, game or creative/artistic/scientific venture. Everyone gets a set income (100K, maybe less) from the wealth fund that they can use to bring in food or resources for their group/personal vocation and lifestyle (but no drugs). If you want more, you have to take it up with the dictator as to why you deserve special consideration.
Government form: Dictatorship with succession by appointment of the previous dictator. The dictator has wide-ranging power but hopefully shouldn’t abuse it. Pressure of community expectations should incentivise competence, transparency and thrift. Unlike in a state of a hundred million, the dictator personally knows the people he’s ruling, judging and administrating. Any corruption is stealing from discrete people, not ‘the treasury’. Besides, he can hardly get away with conspicuous consumption on a tiny island.
Entry requirements: Must be recommended by several other Heavenites and have a track record of virtue.
You’ve been paying 10% of your income to the church as condition of membership anyway, but if you get a ticket to the island you have to sell your house and car and so on and contribute it to the wealth fund.
There shouldn’t be any unrest and crime since we’ve cherry-picked wealthy and virtuous people and they should share very close communal bonds, being part of the same religion. Not more than 5,000 people should be on the island anyway. If there is serious crime, the offender should be exiled from the island and the religion. Should be a good deterrent, since they’ve sold all their worldly belongings as condition of entry.
Budget is hazy. If population is 4500 + 500 servants, then personnel might cost 450,000,000 + 10,000,000 in labour costs alone. There’s probably some hydroponics on the island, solar power and money might be made in intellectual property but this will not come close to breaking even. Maybe another 20,000,000 in maintenance, power, utilities and resources? I don’t know how much artificial islands cost. 500 million a year is my estimated budget, including security and expansion. That would require a sovereign wealth fund of 10-20 billion, depending upon rate of return. Apparently, the Mormons have 20-25 billion so that isn’t too unattainable.
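A quick sanity check on those numbers (all figures are guesses from the paragraph above; the 20K servant wage is just the assumption that makes the 10,000,000 labour figure work out):

```python
residents, servants = 4500, 500
ubi = 100_000              # per-resident stipend from the wealth fund
servant_wage = 20_000      # implied by the 10,000,000 labour figure
personnel = residents * ubi + servants * servant_wage  # 460,000,000
overhead = 20_000_000      # maintenance, power, utilities, resources
annual = personnel + overhead  # 480M; call it ~500M with security

for rate in (0.025, 0.05):  # plausible real returns on the wealth fund
    print(f"fund needed at {rate:.1%} return: {annual / rate / 1e9:.1f}B")
```

Which lands right in the 10–20 billion range quoted, depending on the assumed rate of return.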
Let me preface this by acknowledging that this isn’t really an answer to your question, but I figure that it’s close enough that you’d want to see it.
With a name like “Heaven” the first thing that comes to mind is a rich people enclave like the titular Elysium in the 2013 Matt Damon movie. In that situation I’m picturing a megacorp-run community where the overwhelming majority of residents are its employees. For bonus points, the company would be called something circuitous like “Exilis Interpersonal Management Coordination Services.”
I don’t know what the core business of the company would be, but it would also have subsidiaries offering all of the services in the community: security, maintenance, sanitation, recreation and so on. The employees of the main corporation would be offered simple sleeping quarters and basic amenities for free in the residential area, eating in cafeterias and having access to gyms, gardens, and some entertainment resources. Those who choose to do so could pay out of pocket to move to better quarters, all the way up to small penthouses and houses with tiny (mostly symbolic) lawns – the epitome of luxury. The employees of the subsidiaries also receive free housing and amenities, but their quarters are of lower quality and further toward the edge of the complex. Lastly there are the non-employees, who enter the complex to sell goods in the market areas (assuming they have permits) or who work for companies that rent store space inside the complex. This last category lives outside the complex and procures its own housing and necessities.
Security is enforced by the security department, who carry out their duties professionally and sometimes very enthusiastically (mostly when dealing with non-employees). All public areas are watched by CCTV to ensure the safety of the employees and their compliance with the rules and guidelines. These rules and guidelines cover everything from inter-employee aggression and defacement of corporate property to dress and grooming standards. Fortunately, adequate and guideline-compliant clothing is provided at a low cost by stores in the shopping areas. Failure to comply with rules and guidelines incurs penalties that officially vary from verbal warnings to termination of employment. This isn’t a euphemism for murder; it merely means the loss of one’s job, as well as one’s living quarters, access to all of the services in the community, and likely contact with one’s friends and loved ones.
Admittance into this community is the same as the hiring process in any major company. The HR department ensures that vacancies are quickly filled with persons of the necessary skills and moral flexibility. Corporate culture is welcoming and tight-knit, but becomes more and more cut-throat the higher one goes up the ladder. The workspace is very hierarchical, but the whole complex is technically workspace, so you can imagine how that works. The HR department mediates interactions between employees, always making sure that their decisions are fair and in the best interests of the company. Employees occasionally leave the complex on their downtime and venture into the non-corporate lands beyond, where they can forget their strictly regimented lives and engage in drug use, prostitution and other actions against the rules and guidelines. The legal department shields the employees from possible consequences of their out-complex actions that might try to find them, but if the severity or frequency of the backlash becomes a liability, the employee will likely face termination. Any rumors about high management hunting former employees for sport in the out-complex slums are pure hearsay.
The budget is drawn up by the highly skilled and exceptionally miserly finance department, which aims to provide employees with the most productivity-enhancing lifestyle it can, according to guidelines from above. This means ensuring that everyone has access to adequate nutrition, basic healthcare, regular sleep, hygiene, and enriching environments that reduce stereotypy and self-mutilation. A “friendly” competitive culture between sectors and departments to achieve the lowest possible costs ensures maximum economic efficiency.
There are actually quite a number of places called Heaven or Shangri-La or Xanadu, etc.
Besides small towns, a lot of them are tourist traps.
Generally beautiful, scenic tourist traps, but tourist traps nonetheless.
So I could think of it as some kind of “experience” tourism. Maybe it’ll be like Disneyland, but with the various interpretations of heaven from the world’s religions and mythologies instead of a jungle land, a western land, and a future land. All fake, commercialized, and somewhat hedonistic, but still fun and entertaining.
Not a place I would care about organizing or governing though.
Congratulations! You have just been appointed CEO of a large health insurance corporation, ExampleCare. You were selected due to your bold new ideas for improving member experiences and health outcomes while keeping our costs about where they are now – the Board will accept increased spending temporarily, but it has to pay off in similarly decreased costs down the line, say within five years. You have complete authority to change ExampleCare’s internal processes, covered services, public health and socioeconomic initiatives, etc. within the bounds of law. We offer a variety of insurance plans, ranging from ACA marketplace plans to Medicare Advantage options and state Medicaid subcontracts.
It’s the evening after your appointment; the champagne has been poured, the cigars have been lit, and the Board is waiting for your speech. What new and exciting changes will ExampleCare be making?
You know that thing TV shows do where the villain of season 1 becomes a hero in season 2?
I’m selling ExampleCare to Private Equity. The way to improve health outcomes and reduce costs is to burn the whole parasitical health insurance industry to the ground.
I’m working out how to partner with a discount airline and several foreign clinics to outsource as many of our patients’ optional surgeries and pharmaceutical purchases as possible outside the US. We’ll cover that knee replacement at 70% here in the US, or you can take a flight to Mexico, where our partnered orthopedic clinic will do it for a lot less and we’ll cover it at 100%. Members get cheap flights for short trips to various foreign destinations (Canada, Mexico, various Caribbean countries) as a benefit, and while we can’t formally tell you to buy your drugs there, your drug benefit covers 100% of costs there but only 50% in the US. Oh, and there’s an entirely informal website that lets you get someone else to pick your drugs up from Canada/Mexico/wherever for a small fee. We are also introducing an advanced telemedicine system where you go to a local clinic, are seen by a nurse or medical assistant, and then consult with a doctor in some lower-cost place over Skype. Add in some kind of on-call doctor in case you need to be seen hands-on.
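To make the incentive structure concrete, here is a toy comparison of the two paths. Only the 70%/100% coverage split is from the plan above; every price is invented for illustration:

```python
# Toy cost comparison: domestic surgery at 70% coverage vs. the same
# procedure abroad at 100% coverage plus a cheap flight.
# All prices below are made up for illustration.
us_price = 35_000       # hypothetical US sticker price for a knee replacement
abroad_price = 8_000    # hypothetical price at the partnered foreign clinic
flight = 400            # hypothetical cost of the member flight benefit

insurer_cost_us = us_price * 0.70              # insurer pays 70% domestically
patient_cost_us = us_price - insurer_cost_us   # member pays the remaining 30%
insurer_cost_abroad = abroad_price + flight    # insurer pays 100% plus the flight
patient_cost_abroad = 0

# Under these assumed prices, both the insurer and the member come out
# ahead by going abroad, which is the whole point of the scheme.
```

The scheme only works, of course, to the extent that the foreign price gap really is that large.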
Basically, the US medical system is in an unfixable cost spiral and has bound-in costs that can’t be lowered much, so let’s just do a damned end-run around it.
Outsourcing health care to other countries would plausibly work, but it doesn’t meet the original poster’s requirement that your plan be within the bounds of current law, which has lots of requirements about local provider network adequacy and maximum out-of-pocket expenses.
Would that actually be legal? It’s one thing to offer equivalent coverage when travelling abroad. More interestingly, if US health insurance *doesn’t* cover repatriation, why don’t they jump at the opportunity to cover medical care everywhere else? My current health insurance only covers me abroad if travelling on company business, which I don’t understand.
I was wondering the same thing. I wouldn’t be surprised if there were something in the legislation concerning medical insurance that required services paid for to be provided by formally qualified (US) health care professionals. That would torpedo a plan to source medical services across the border.
I’m not sure that it can be done, particularly with an established organization, full of people who don’t want to change. But even without that, many of my worst interactions with health insurers have clearly been the result of money saving attempts on their part. (The harder it is to get an appeal considered, and the more often perfectly valid claims are randomly denied, the better for their bottom line….)
I think an insurance company that could eliminate surprise bills and force a pre-agreed price to the customer and a single bill at the end of the treatment for all medical services would be a huge win in terms of user experience. Medical billing is optimized for fraud.
HMOs delivered, at worst, care that was within statistical noise of other offerings. And at cheaper cost.
People hated the experience, though.
This is why I am always nervous about any new health care system proposals. I think there are lots of ways to make the system better along the obvious metrics, like outcomes and cost. But people hate being told there is some care they could get but The Man won’t pay for it, whoever The Man is. So we will get halfway through implementing the new plan, people will realize that they hate some significant unpopular part of it, and then the politicians will take out that significant unpopular part — except that the unpopular part wasn’t in there for shits and giggles; it was actually necessary. And then we staple another kludge onto the kludgocracy.
I think the issue with something like Kaiser is that it works great, as long as you are generally healthy. I have Kaiser and love it because it is cheap and efficient. However, I also know that if I ever have an uncommon health issue I will be mostly screwed, and would have to pay a ton out of pocket to get to see a real specialist. Since I’m generally healthy and Kaiser is excellent on common issues, I feel like it’s worth the risk. As I get older, though, I will probably look for something else.
We immediately partner with several teaching hospitals in our coverage area to fund double the number of residency slots. With the new slots, we bake ongoing contracts into the residency agreements: residents will continue to work for us after residency, for a certain period of time, at below-market rates. Then, we offer care to our policyholders at a discounted rate at the facilities where these discounted doctors work. Since these are marginal doctors, who would not have qualified for a residency program without our new slots, they may not be the best, but they should be willing to work for less than current doctors. Also, increasing the supply of doctors will create downward pressure on doctor compensation across the board.
To address the problem at the other end, we will begin offering end-of-life payouts to any customer that contracts a potentially fatal condition in lieu of treatment. Payments will begin at 25% of expected expenditure on the patient, and be negotiable up to 75% of expected expenditure. For example, if a patient contracts a cancer that we expect will cost $1 million to treat, they will be offered a $250-$750k lump sum payment to not have the cancer treatments be billable to our insurance. This will be marketed as power for the customer to choose their own terms for end of life, showing lots of elderly people taking one last elaborate vacation before they pass, instead of spending their last days in a hospital.
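A minimal sketch of the offer band (the function is hypothetical; only the 25-75% band and the $1 million example come from the comment):

```python
# Hypothetical end-of-life payout band: offers start at 25% of expected
# treatment cost and are negotiable up to 75%.
def payout_range(expected_cost):
    return 0.25 * expected_cost, 0.75 * expected_cost

low, high = payout_range(1_000_000)   # the $1M cancer example above
# Even at the very top of the band, the insurer still avoids 25%
# of the expected treatment cost.
insurer_savings_floor = 1_000_000 - high
```

From the insurer’s side, every accepted offer is a guaranteed saving; the negotiation only determines how large.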
That’s why we need to hire the best marketing team out there. It’s not indentured service, it’s investing millions in the next generation of doctors. It’s not death bribes, it’s allowing our customers to live life to the fullest while they can. With good enough spin the AMA will be thanking us.
To address the problem at the other end, we will begin offering end-of-life payouts to any customer that contracts a potentially fatal condition in lieu of treatment. Payments will begin at 25% of expected expenditure on the patient, and be negotiable up to 75% of expected expenditure. For example, if a patient contracts a cancer that we expect will cost $1 million to treat, they will be offered a $250-$750k lump sum payment to not have the cancer treatments be billable to our insurance. This will be marketed as power for the customer to choose their own terms for end of life, showing lots of elderly people taking one last elaborate vacation before they pass, instead of spending their last days in a hospital.
This is a brilliant idea that will proceed to have hilarious unintended consequences when some clever patient realises that since insurance cannot deny coverage for pre-existing conditions, they should take the lump sum and immediately switch to a different insurer, who will be obligated to treat them. I expect the knock-on effects of this will culminate in Congress making the practice illegal, if the courts don’t do it first. Also expect some very expensive lawsuits from the relatives of people who took the lump sum option, then months later realised they really didn’t want to die and desperately started seeking treatment at a point where it was much too late. There are going to be lots of sob stories along those lines blasted all over the media and the courtrooms.
We can use this to our advantage (until Congress outlaws it). Make the payout tied to a loyalty metric, like 2% of expected cost per year you have been a customer. Charge customers an additional premium, since they know they may have the option to take the buyout, and reap increased income from premiums until we actually do have to pay out – and then we pay out at a fraction of what our true cost would be, and our competitors take a hit as well (at least until the competition realizes what we are doing and copies it; maybe we can get a business-methods patent on the idea and be the only ones to benefit from it until it is eventually outlawed).
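A sketch of that loyalty-scaled buyout; the function name is hypothetical, and capping the offer at the 75% ceiling proposed upthread is my own assumption:

```python
# Hypothetical loyalty-scaled buyout: 2% of expected treatment cost per
# year of tenure, capped (my assumption) at the earlier 75% ceiling.
def loyalty_payout(expected_cost, years_as_customer, rate=0.02, cap=0.75):
    return expected_cost * min(rate * years_as_customer, cap)

# A 10-year customer facing a $1M treatment is offered roughly $200k;
# the insurer avoids the remaining ~$800k of expected cost.
offer = loyalty_payout(1_000_000, 10)
```

The cap matters: without it, a customer of 50+ years would be owed more than the original band ever allowed.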
I’d pour all my resources into finding a way to distill and bottle “judgment based medicine” and not “rules-based medicine” and sell it. And then I’d branch out to teaching and hosts of other areas where some unknown force is trying to destroy all human judgment in favor of rules, rules, rules.
Echoing Gobbobobble, doing what I can to burn it to the ground and start from scratch, or lobbying super-hard to truly liberalize this market – i.e. not the Centers for Medicare and Medicaid Services making a tiny pilot of “innovation under these 20 conditions”. The rules are so many, so conflicting, so contradictory, so cumbersome; calling it a “market” is laughable; the overhead is so big it seems impossible to quantify. The field is stacked for Example-corp to become evil or fail, or remain niche and small – however noble its intentions.
We’re going to invest in lobbying via supporting our own set of politicians, who will ape the Party lines but with the addition of being in our pockets. Apparently that sort of spending is very cost-effective, and that is, after all, the group which will most determine our profits, given the current level of State and Federal health insurance regulation.
Alternately, we’ll go ahead and lead an industry-wide lobbying cartel, but the preference is, if we can afford it, for them to be in our company’s pocket specifically.
The blackface discussion below raises a kind of interesting meta-issue that seems appropriate for a CW-permitted thread: When some action or symbol or word choice is decried as offensive, the speaker doesn’t just mean “this offends me” or “this offends many people,” he generally means “you should also be offended on behalf of the people offended by this.” So if most blacks find blackface offensive, but most whites don’t really care, then the argument is that whites should take offense on behalf of blacks.
Deciding whose offense should be taken up by others, whose should be ignored, and whose should be silenced or attacked as itself offensive is where the action is, here. There often seems to be a notion that we’re obliged to take offense on behalf of some groups or individuals. Some examples:
a. PZ Myers’ treatment of consecrated host was quite offensive to a lot of Catholics–should everyone have joined in on our offense? Were non-Catholics obliged to join in on our offense?
b. There have been occasional jackass publicity-seekers who announced their intention to, say, tear up a Koran on TV or something, offending the hell out of a lot of Muslims. Are non-Muslims obliged to join in on their offense?
c. Many people with traditional values are offended by open homosexuality. Lots of people are offended or at least made very uncomfortable by trans people. Are the rest of us obliged to take offense on their behalf?
d. Various cartoonists have drawn pictures of Mohammed, which offends some Muslims. Are we obliged to take offense on their behalf?
e. Some people find some previously-widely-used terms offensive, like “Oriental,” “colored,” “blind,” “retarded,” etc. Are we obliged to take offense on their behalf, when those words are used?
f. Some people are offended when aspects of their cultural heritage are used by others (or used casually by others). Should the rest of us join in on their offense?
g. Some people find the names of sports teams (Redskins, Indians, etc.) offensive and demand they change. Are the rest of us obliged to also find those things offensive?
h. Some people find interracial couples to be really offensive and upsetting. Should the rest of us join in and demand that interracial couples not offend those folks with their presence/visibility?
i. Some people find disrespect toward the flag very offensive. Are the rest of us obliged to go along?
and so on.
A huge number of public outrage-fests and think-pieces seem to follow this pattern. The argument is over whose offense matters and whose doesn’t. The implicit claim is that we are all obliged to adopt a kind of transitive offense from some groups, but definitely not others. The people who are offended by being called “Oriental” instead of “Asian” deserve our support; the ones offended by gay or interracial couples appearing in public deserve our scorn. Either kneeling during the National Anthem or standing during it is offensive, and we just need to decide which one.
I’m not convinced that transitively taking offense makes the world a better place very often. It’s mixed in with the normal social expectation of politeness, though–if you’re going around offending people all the time intentionally, I’m probably going to think you’re an asshole and treat you accordingly. OTOH, there are definitely people who take offense as a strategy, and others who are very sensitive about issues I don’t think are really all that important. And taking offense in public is also a way of getting a lot of attention, which is why we get a lot of it at present.
I have a lot of thoughts on this but I’m not really sure how to put them down coherently. While I mull it over, I’ll ask this: Is “blind” actually considered offensive now?
I’ve seen people who seem to think it is, but most people don’t. It’s probably more like “black” vs “African-American,” where there was a push to change the acceptable term but it never really got much traction.
My memory only goes back to the 90s, but “black people” is entirely inoffensive and the default term outside of professional-speak. “African-American” probably got outvoted for being too verbose.
To me “black” implies skin color (with some caveats; Beyonce is ‘black’ despite her shade), while ‘African-American’ implies someone ‘black’ whose ancestors have been on this continent for generations, so both Barack and Michelle Obama are black, but only Michelle is African-American (though arguably he is by marriage).
Ironically, I use the terms the opposite way. Idris Elba is black, but he’s not African-American. Elon Musk is African-American, but he’s not black. Barack Obama is both African-American and black, Michelle Obama is black but not African-American.
@mdet,
I get that, it’s putting the “African” as where the family recently migrated from, but when the specific nation on the continent is known (Kenya) and recent, then “Kenyan-American” would be used instead (though actually that would be his Dad, with Barack being “mixed Kenyan and Kansan”, and “African-American” being used for those whose immigrant ancestors are very far back).
This also goes for “European-Americans”: someone who has both parents born in the same other nation (e.g. Ireland) would be “Irish-American”, but add an Italian parent as well (a common mix) and then the general continent is used.
I do realize that “Asian-American” is commonly used even when the specific nation ancestry is known, but what you gonna do?
Really I wanted a way to distinguish say Kamala Harris (Indian and Jamaican-American) from say Cory Booker (but even that doesn’t quite fit; most ‘African-Americans’ have some European ancestors as well, and it seems most who have American great-great-grandparents verbally claim some American Indian ancestry as well, whether black or white).
I know at work I’m referred to as “one of the Irish guys” though my ancestors are decidedly more mixed.
PZ Myers can be extremely offensive towards christians because his side has the cultural power and christians are not only “not protected” by the left, they are actively oppressed. He would never do that towards muslims, which are a protected class of the left. The ones that attack christians and muslims (e.g., Sam Harris, Richard Dawkins, Christopher Hitchens) get rebuked strongly.
Meanwhile saying stuff like “men arent women” can get you banned for life from twitter. There is no other logic for what is allowable or not in public life. Offend the left, get destroyed, offend the right, get promoted.
By enforcing leftwing taboos and laughing at rightwing taboos, the left is just demonstrating what it can do.
PZ Myers can be extremely offensive towards christians because his side has the cultural power and christians are not only “not protected” by the left, they are actively oppressed. He would never do that towards muslims, which are a protected class of the left.
That’s not true, though. The infamous communion host incident also included desecrating some pages from the Koran.
Just for the record, I’d have considered him an asshole for the Koran thing even if he’d never done anything with consecrated hosts. I’m not offended by it, exactly, I just think doing it with the goal of getting attention and offending people marks you out as an asshole.
To differentiate between two offensive incidents, I think there’s a big difference between Koran burning and Mohammed drawing.
In our culture, we use cartoons as a way of communicating pithy points. Yes, those points can be offensive, but the act itself isn’t. Our country itself, its founders, other religious leaders, present day leaders – they’re all fair game.
On the other hand, we use book burning as a way of declaring anathema.
It’s a difference of saying “There’s a problem with X” versus “X has no place here.”
Muslims who are offended by the latter are trying to get equality (on this point, at least). But Muslims who are offended by the former are trying to get special treatment.
There’s a case to be made for “respect each group with the form that is meaningful to them” but it’s definitely more of a demand than “don’t revile us in particular.”
@albatross11 Oh, I agree. I don’t like any of the “new atheists,” but IMO Myers is easily the worst of a bad lot, and the stunt in question is a good illustration of why. It had no intellectual point, nor was there anything interesting about it. Its sole purpose was to hurt religious believers and laugh at them for being hurt.
Muslims who are offended by the latter are trying to get equality (on this point, at least). But Muslims who are offended by the former are trying to get special treatment.
There’s a case to be made for “respect each group with the form that is meaningful to them” but it’s definitely more of a demand than “don’t revile us in particular.”
This reminds me of something I don’t think I’ve articulated before, which is that some such requests seem to appeal basically to respect, but aren’t actually consistent with respect. Like, there’s such a thing as courtesy, where many non-Catholics will still address a priest as Father, or total strangers will call PhDs Doctor, and it’s basically a matter of politeness rather than anything sincerely meant – and not something anyone can be made to do, either. But I think we more commonly think of respect as something that should be sincerely felt and given.
So for instance I run into problems processing requests for me to “just use the damn pronouns,” as a matter of common decency or respect no less, because it seems like I’m being asked to pretend. To put on a show of affirming a chosen gender when I actually don’t. I certainly haven’t had the experience, so maybe I’m misunderstanding it, but from my armchair that sounds more offensive.
But I think we more commonly think of respect as something that should be sincerely felt and given.
If you want self-esteem, you want to be given heartfelt respect.
If you want to demonstrate power, you want even people who dislike you to feel the need to acquiesce.
Pronouns are a mix, I think, or varies by individual.
In the case of titles, using them is also a demonstration of respect for the organization that bestows them. Someone who feels the Catholic church has zero authority may chafe at using the term ‘Father’, in the same way they would with a doctor with a mail-order diploma or a general appointed by a tin-pot dictator.
I think there’s a big difference between Koran burning and Mohammed drawing.
I don’t see one (assuming you mean drawing Mohammad as deliberate provocation rather than without realising that it’s offensive). Is burning a Bible significantly different to desecrating a host? That’s the same thing.
I don’t see one (assuming you mean drawing Mohammad as deliberate provocation rather than without realising that it’s offensive). Is burning a Bible significantly different to desecrating a host? That’s the same thing.
I did try and explain the difference, but I’ll give it another go. A cartoon is something we use to communicate an argument. The medium is not offensive in our culture, even if the message can be. (Some) Muslims reject any right of even non-Muslims to draw Muhammad in any way, because in their culture they see artistic depictions of the sacred as blasphemous. We do not. The offense was due to the foreign cultural values the minority group was trying to impose as a one-sided norm.
When an artist uses a cartoon to make a point, he is not doing something we view as fundamentally offensive. This doesn’t hold for book burning.
There have been times when drawing Muhammad was done as a way of pushing back against the encroaching norms against speech that minority groups claim offense at. This is somewhere in the middle: it has a clear message and argument to it beyond just the hurt feelings – establishing a universal norm.
I don’t know exactly what form Myers’s desecration took. It might fall under one category or the other. Edit: Looking this up, I found a report that he threw it in the garbage and took a picture. This seems like a deliberate gesture of contempt rather than making an argument in pictorial form; much closer to book burning than drawing a cartoon is, and also not a practice commonly done in a variety of other contexts.
I don’t see one (assuming you mean drawing Mohammad as deliberate provocation rather than without realising that it’s offensive).
The binary you’ve presented begs the question. There are spaces, e.g. art, where one may wish to draw Mohammed in a way that is not a deliberate provocation, even while realizing that some find it offensive.
There really are no spaces where one would burn a Koran other than for deliberate provocation and offense.
@Randy M
I think you’re isolating the act of cartoon drawing from the motive but not doing the same for book burning. If you ignore motive, there’s nothing wrong with drawing any cartoon, but equally it’s fine to burn books if, say, you would otherwise freeze to death. If you’re going to say that book burning or host desecration are different because they’re contemptuous, then you need to also defend the motives of cartoonists. I think that it is reasonable to argue that, say, the Jyllands-Posten cartoons were in some sense less “objectively offensive” than someone burning a Koran or Bible, but that’s entirely based on the intent and context, not the act itself. Burning a book as protest against severe religious censorship would be far less “objectively offensive” than either (indeed I would view that as morally commendable).
I think that it is reasonable to argue that, say, the Jyllands-Posten cartoons were in some sense less “objectively offensive” than someone burning a Koran or Bible, but that’s entirely based on the intent and context, not the act itself.
I dont think offense can be objectively determined. The cartoons episode just demonstrated an incompatibility between western culture and Islam. We dont have the notion that drawing something is offensive, while in Islam, drawing Mohammed or Allah is a big deal.
There is another level of offense that can occur if the drawing itself is disrespectful, but that is not what Islam forbids, Islam forbids any kind of drawing.
In this case anyways, when liberal western culture ran up against Islamic teachings, liberal western culture folded like a cheap suit.
I think that it is reasonable to argue that, say, the Jyllands-Posten cartoons were in some sense less “objectively offensive” than someone burning a Koran or Bible, but that’s entirely based on the intent and context
Well of course it’s based on context. In the context of the cultures in which the controversy took place, book burning is always an offensive act, and cartooning (in a publication) is always a valid form of expression (and the message itself may be an offensive one, but that wasn’t what the dispute hinged on). Whereas in Islamic culture, any depiction of their sacred objects is offensive. That’s fine for them, but we don’t have an obligation to live by their rules. I consider it supererogatory to honor others by their own customs.
Now, is there some situation where throwing an object in the garbage is a valid form of expression? It’s less of a defined symbolic act, but the medium seems to me significantly closer to book burning than cartoon drawing.
In western society, offensive cartoons are just a part of life. See: 90% of political cartoons.
When a cartoonist draws something heaping abuse on their outgroup, people who respond with death threats are treated as the problem. But apparently when someone draws Mohammed, even in an innocuous context, the fargroup are totally in the right to issue threats and really the cartoonist should have known better.
When a cartoonist draws something heaping abuse on their outgroup, people who respond with death threats are treated as the problem. But apparently when someone draws Mohammed, even in an innocuous context, the fargroup are totally in the right to issue threats and really the cartoonist should have known better.
The principle seems to be that everyone should be tolerant and practice liberal values, except Muslims. They have a right to be homicidally offended, though Leftists never make it clear why.
We can infer that they consider Muslims a race, so this is “punching up” against racism, but I’ve never seen anyone murdered for making caricatures of black people. So the exception is unique and inscrutable.
They have a right to be homicidally offended, though Leftists never make it clear why.
Because we don’t tend to believe it. The ones that do believe it are generally happy enough to inform you of their crazy beliefs and you’re free to write them off as fringe nuts. What you don’t get to do is imply, constantly, every chance that you have, in every open thread it comes up, that this is anything more than a fringe element of the left, and I still want you to stop doing so.
So it seems like you’re making the distinction between:
a. Doing stuff that’s intended as offensive in our culture and taken as offensive in their culture.
b. Doing stuff that’s normal in our culture and taken as offensive in their culture.
With (b), you might do it without trying to be offensive, just as part of your normal day-to-day life. With (a), it can only be done as an intentional provocation.
The principle seems to be that everyone should be tolerant and practice liberal values, except Muslims. They have a right to be homicidally offended, though Leftists never make it clear why.
I don’t think many people justify something like the attack on Charlie Hebdo. Many people think the cartoonists were bad people for the way they behaved toward Islam, but outside of fundamentalist Muslim circles, I don’t think you’re going to find many people saying shooting the cartoonists was justified.
Such a distinction assumes that all that matters is whether you know that something is offensive to someone. However, if people have to refrain from anything that is offensive to someone or to some group, then almost nothing is allowed anymore.
I reject the idea that if you know that people take offense, you are obliged to change. At the very least, there should be a cost vs benefit decision, although in a somewhat liberal society, you simply have the right to do everything that is not banned by law.
Now, is there some situation where throwing an object in the garbage is a valid form of expression? It’s less of a defined symbolic act, but the medium seems to me significantly closer to book burning than to cartoon drawing.
That’s not the relevant analogy. My point is that
in the Catholic culture, disposing of certain crackers in a perfectly normal way is offensive.
And therefore if you are being consistent you should also conclude
That’s fine for them, but we don’t have an obligation to live by their rules. I consider it supererogatory to honor others by their own customs.
in this case.
@Gobbobobble
Is that directed at me, and therefore either wilful or incredibly stupid mischaracterisation of my beliefs? Or are you injecting irrelevant low-quality snark at some non-present and quite possibly imaginary outgroup? Either way, it’s bad and you should feel bad.
in the Catholic culture, disposing of certain crackers in a perfectly normal way is offensive
That’s not a direct quote, but rephrased like that, yes, this is a less offensive act in American culture than burning a book.
Because of other factors discussed (intent, message, etc.) I’d put Myers’s action on par with “draw an offensive cartoon of Mohammed to piss off censorious Muslims” and well above “treat Mohammed like you would any other person and use his image to make a point.” Which is about on par with the cleaning lady accidentally throwing out the wafer left in the wrong place.
Myers’ act was not simply about disposing of certain crackers. He acquired consecrated host to perform it, which requires passing oneself off as a Catholic and making off with the host in secret. This is more akin to stealing a Bible from a church to be burned than simply buying one, and it is phenomenally disrespectful.
Is that directed at me, and therefore either wilful or incredibly stupid mischaracterisation of my beliefs? Or are you injecting irrelevant low-quality snark at some non-present and quite possibly imaginary outgroup? Either way, it’s bad and you should feel bad.
It was in response to the general line of “cartoons aren’t offensive in the West”, actually, but if you wanna sanctimoniously clutch pearls, go right ahead.
If you are going out of your way to piss someone off, I’m generally going to think you’re kind-of an asshole. If you’re going about your normal business and piss someone off inadvertently, I’m not generally going to think you’re an asshole.
If some non-Catholic goes up and takes communion out of ignorance, or even out of just wanting to blend in and not caring much about our rules, I’m not going to think they’re a bad person. Even if they walk out with the consecrated host in their hand without intending to do anything offensive, I think they’re just people who inadvertently did something offensive to the beliefs of Catholics. But if they plan to take the consecrated host and publish pictures of tossing it in the trash or stomping on it or something, going out of their way to offend people, I’m going to think poorly of them. There can be reasons why you need to do something that offends someone else (think True, Necessary, Kind), but if your goal is “this’ll really piss those bastards off,” you’re probably just being a jerk.
The ‘asshole’ thing seems to be a superweapon based on appeal to politeness rather than formal rules. Problem is, since it fails to apply any judgement to the demands being made, just the refusal to accede to them, its effect is to ratify unreasonable demands.
A Catholic priest expecting you not to take a consecrated host from his hand, conceal it, and later defile it is one thing. An Imam demanding that no one draw Muhammed is quite another, and I’d argue if you’re going to apply some sort of “asshole” standard, those making demands that no one draw Muhammed are the assholes.
Because we don’t tend to believe it. The ones that do believe it are generally happy enough to inform you of their crazy beliefs and you’re free to write them off as fringe nuts. What you don’t get to do is imply, constantly, every chance that you have, in every open thread it comes up, that this is anything more than a fringe element of the left, and I still want you to stop doing so.
What is the mainstream leftist (for values of “left” that extend as far as Theresa May) position on violent crimes (crimes under Western law, that is) inspired by Islam?
When they disagree with another belief system, like Christianity or racism, they seem very concerned about stopping it well short of the feared actions, not allowing the number of people with odious beliefs to increase, etc. So if orthodox Islam inspires certain violent acts, what is the Left proposal to stop it? What’s your equivalent of an immigration ban?
(I’d be thrilled if the answer was “an immigration ban”, because then it would be consensus and I wouldn’t have to support Trump.)
What is the mainstream leftist (for values of “left” that extend as far as Theresa May) position on violent crimes (crimes under Western law, that is) inspired by Islam?
Thank you for asking this rather than making assumptions. I very genuinely appreciate that.
The mainstream leftist opinion is that crimes inspired by Islam are dangerous and need to be halted. You see this in no leftist parties being in favor of putting Islam above the law, and none of them asking for Muslims to be pardoned or even just given lighter sentences. The US hasn’t had a large attack in a while, so, enjoy.
This is an organization of ex-muslims, mostly homosexuals and atheists.
They say they have no political home– the right isn’t fond of homosexuals and atheists and the left is unwilling to acknowledge how dangerous mainstream Islam is to ex-Muslims.
For what it’s worth, I mostly hang out in leftish circles, and it took me some work to internalize just how much Muslims and ex-Muslims are at risk from other Muslims.
When a cartoonist draws something heaping abuse on their outgroup, people who respond with death threats are treated as the problem. But apparently when someone draws Mohammed, even in an innocuous context, the fargroup are totally in the right to issue threats and really the cartoonist should have known better.
is evidently snarking at someone (unless you mean to say you are genuinely expressing that opinion). Who?
The mainstream leftist opinion is that crimes inspired by Islam are dangerous and need to be halted. You see this in no leftist parties being in favor of putting Islam above the law, and none of them asking for Muslims to be pardoned or even just given lighter sentences
I don’t recall anyone on the right suggesting that James Alex Fields should be pardoned, either, so I guess we’re all one big happy family here.
But what’s the left’s position on how Islam should be treated for its role in inspiring crime? If crimes inspired by Islam are dangerous and need to be halted, should the left perhaps be looking to halt them at the source?
Because that’s something the left seems to favor in other circumstances. James Alex Fields, therefore the far right is a Nazi menace that needs to be stopped from inspiring dangerous crimes; it should be opposed and ridiculed and punched and driven from polite society wherever leftists hold sway. Some anti-abortion murders that mostly petered out a decade ago, therefore Fundamentalist Christians are a menace that needs to be stopped, etc. Elliot Rodger, therefore Incels are the scum of the Earth and need to be thoroughly marginalized lest they inspire any more crimes. Etc, etc, etc.
As Nancy Lebovitz notes, Islam inspires its followers to go out and kill anyone who decides not to be a Muslim any more, and they’re not at all secretive about that. Therefore, Islam is…
You’re Americans. Signaling your support of Islam is cheap in countries where Muslims are a statistical footnote. Half of them are your own people turned converts; they’re not particularly orthodox, and they blend in very easily because of these things. Noticing that they’re not treated as harshly by the left as people arguing against abortion or for the murder of all women is something Scott picked up on five years ago with I Can Tolerate Anything Except The Outgroup: Muslims are a fargroup for American leftists. Extremely few of them know any Muslims, fewer still know any conservative Muslims, and they don’t have a visceral reason to care. Signaling allegiance to the ingroup is much more important to them than worrying about a bunch of native-born blacks and whites who converted to a trendy religion.
Over here, the response to the question is that we haven’t gone so full-on partisan that nobody can get along anymore. Leftists who govern actually have to deal with Muslims who aren’t basically regular people who stopped eating pork, so going along with whatever they say isn’t at all viable. I feel comfortable saying that you’re noticing a particularly American problem here.
@Le Maistre Chat @John Schilling
The mainstream leftist view is approximately:
Islamic terrorists : Islam :: Westboro Baptist Church : Christianity
(This isn’t to say the magnitude of wrongdoing is remotely comparable, but the attitude that “these terrible, nonrepresentative Muslims/Christians have twisted the core, peaceful teachings of their religion and do not represent the vast majority of nonviolent/non-asshole Muslims/Christians” is the same.)
The mainstream leftist view is approximately:
Islamic terrorists : Islam :: Westboro Baptist Church : Christianity
OK, but the mainstream leftist (as opposed to merely liberal) view seems to be that organizations like the Westboro Baptist Church prove that Christianity as a whole needs to be kept down lest it return to its bad old ways across the board, that Christians should be presumed to be bible-thumping joy-hating gay-lynching bigots until proven otherwise, and that anyone who isn’t that will have the decency to not call themselves “Christian” in public.
“You built a factory out there, good for you. But I want to be clear. You moved your goods to market on the roads that the rest of us paid for. You hired workers that the rest of us paid to educate. You were safe in your factory because of police forces and fire forces that the rest of us paid for.”
“Now look, you built a factory and it turned into something terrific or a great idea, God Bless, keep a big hunk of it. But part of the underlying social contract is you take a hunk of that and paid forward for the next kid who comes along.”
“I hear all this, you know, ‘Well, this is class warfare, this is whatever.’ No. There is nobody in this country who got rich on his own – nobody.”
“Other countries around the world make employees and retirees first in the priority. For example, in Mexico, the bankruptcy laws say if a company wants to go bankrupt… obligations to employees and retirees will have a first priority. That has an effect on every negotiation that takes place with every company in Mexico.”
“Every time the U.S. government makes a low-cost loan to someone, it’s investing in them.”
“To fix this problem [of stagnant wages] we need to end the harmful corporate obsession with maximizing shareholder returns at all costs, which has sucked trillions of dollars away from workers and necessary long-term investments.”
Christians are not oppressed by the left. They are allowed to congregate in places of worship unmolested. They are not fired for being Christian except when Christians insist their religion interferes with the course of their job, which happens to people of every religion (a Muslim taxi driver who refuses to drive drunks around because alcohol is forbidden is soon going to be a Muslim without a job). They are not attacked for their faith, nor forced to hide it, again excepting situations when anyone would be expected to leave their faith at the door. Christians are allowed to preach peaceably in public places, including state-funded places such as colleges – I know, because there was always a dude with a sign about how much God hated me at my college.
Look at Soviet Russia, modern day China, or modern day Iran for a picture of what religious persecution of Christians looks like. Hell, read the last few books of the Bible for a picture of what religious persecution looks like – or even Reformation Europe.
We don’t have a lot of religious oppression in the US, thanks to laws going all the way up to the constitution forbidding it. On the other hand, in mainstream media culture, things that offend Christians seem less upsetting to a lot of people who get a lot of airtime than things that offend some other group–Muslims, for example.
As best I can tell, this is pure “I can stand anything but the outgroup.” For the kind of people who become media elites, American Christians (especially fundamentalist Christians) are the outgroup, whereas Muslims are a faraway group that doesn’t register in local conflicts.
Keep in mind that nearly seven-tenths of the country is Christian. Even media elites can’t actually afford to do stuff that is outright offensive to every Christian – that would completely wreck their ratings. They can afford to piss off fundies, but that’s because there’s so many moderates that will just roll their eyes and move on.
@jermo sapiens
I imagine they will be allowed to congregate in places of worship unmolested. They will not be fired for being Christian except when they insist their religion interferes with the course of their job. They will not be attacked for their faith, nor forced to hide it, again excepting situations when anyone would be expected to leave their faith at the door. Christians will be allowed to preach peaceably in public places, including state-funded places such as colleges.
@TakatoGuil
You are literally copy and pasting part of your prior answer. Please don’t do that. But since you did, I’ll respond to a few more directly.
I imagine they will be allowed to congregate in places of worship unmolested.
Beto was saying, what, last week that we should revoke tax exemption for churches that don’t support same sex marriage? How exactly will they congregate in places of worship unmolested when they are being taxed out of existence?
They will not be fired for being Christian except when Christians insist their religion interferes with the course of their job,
It matters a lot who decides what “the course of their job” is. Like deciding that hospitals are required to perform abortions or that doctors are required to make referrals for procedures they consider harmful.
They will not be attacked for their faith, nor forced to hide it, again excepting situations when anyone would be expected to leave their faith at the door.
That must be why Senators have been quizzing judicial nominees on their membership in the Knights of Columbus. Because membership in a private charitable organization is just part of leaving their faith at the door.
ETA: removed some snark
I know, because there was always a dude with a sign about how much God hated me at my college
I suspect this has shaped your attitude towards Christians and I dont blame you. Hopefully you’ll come to see that most Christians see the “dude with the sign” as a complete moron.
@Nick
If Jermo wants to ignore my initial response to him in favor of “If the oppression I’m baselessly claiming exists now is as bad as I baselessly say it is, imagine how much worse this slippery slope fallacy could be later!”, then I will repeat my point to him as many times as it takes.
Anyway, Beto wants churches to be held to the same standards as other 501(c) organizations, which are not allowed to use their exempt money for political purposes. Churches do that today. It is not oppression to say that they should follow the same laws that apply to everyone.
As for doctors, part of the job is getting patients treated. If you refuse to do what the evidence-based consensus says treats the problem, and you refuse to refer them to someone who will in America’s shitty medical system where referrals are as necessary as they are, you are impeding treatment and yes, need to be removed. Personally, I’d rather we have a saner medical system so that referrals weren’t necessary, which would seem to resolve the issue.
The Knights of Columbus thing was wrong (I wasn’t surprised to see Harris involved when I googled it — I really don’t like her), but I don’t think it stopped the judge from getting the position and even the Washington Post called it bullshit so that’s left-leaning media stepping up for Christians, not oppressing them. Also note that it’s specifically anti-Catholic bigotry, not anti-Christian bigotry. The former was a historical problem and it’s not surprising to see that there’s still some vestiges left of it today. Hopefully journalists on both sides of the aisle will continue to decry it when it occurs, and judges will continue to be confirmed despite any bigoted attempts against them.
Beto wants churches to be held to the same standards as other 501(c) organizations, which are not allowed to use their exempt money for political purposes. Churches do that today. It is not oppression to say that they should follow the same laws that apply to everyone.
Some churches may be abusing their status, but the question Beto was answering was whether it should be revoked for opposing same-sex marriage. That’s not political purposes. Here’s the exchange:
“Do you think religious institutions like colleges, churches, charities should they lose their tax-exempt status if they oppose same-sex marriage?” Lemon asked.
“Yes,” O’Rourke replied. “There can be no reward, no benefit, no tax break for anyone, or any institution, any organization in America, that denies the full human rights and the full civil rights of every single one of us. And so as president, we are going to make that a priority, and we are going to stop those who are infringing upon the human rights of our fellow Americans.”
As for doctors, part of the job is getting patients treated. If you refuse to do what the evidence-based consensus says treats the problem, and you refuse to refer them to someone who will in America’s shitty medical system where referrals are as necessary as they are, you are impeding treatment and yes, need to be removed. Personally, I’d rather we have a saner medical system so that referrals weren’t necessary, which would seem to resolve the issue.
The doctor’s job is deciding what is and isn’t treatment. Doctors are not the instruments of the patients’ wills. That was actually a case in Canada, anyway (where jermo is from), where referrals are not so necessary.
The Knights of Columbus thing was wrong (I wasn’t surprised to see Harris involved when I googled it — I really don’t like her), but I don’t think it stopped the judge from getting the position and even the Washington Post called it bullshit so that’s left-leaning media stepping up for Christians, not oppressing them. Also note that it’s specifically anti-Catholic bigotry, not anti-Christian bigotry. The former was a historical problem and it’s not surprising to see that there’s still some vestiges left of it today. Hopefully journalists on both sides of the aisle will continue to decry it when it occurs, and judges will continue to be confirmed despite any bigoted attempts against them.
I appreciate that, FWIW, but I don’t think things are quite the way you put it. For one thing, anti-Catholic bigotry waned a long time before this, and the recent stuff is not coming from Protestant objections to Romish practices, so I think the recent waxing is something new. For another, it wasn’t just Harris, it was also Senator Hirono and, with the Amy Coney Barrett and People of Praise case, Senator Feinstein.
I would love to see it wane again, but I don’t think that’s the trend. And regardless, counterexamples are helpful when you make a blanket statement.
If Jermo wants to ignore my initial response to him in favor of “If the oppression I’m baselessly claiming exists now is as bad as I baselessly say it is, imagine how much worse this slippery slope fallacy could be later!”, then I will repeat my point to him as many times as it takes.
I’m not ignoring it. Others have answered your earlier point quite well, and I didnt feel the need to add anything.
And my point is that at 70%, Christians currently punch well below their weight in terms of cultural influence, and they are the designated outgroup for the elite. You are almost right when you say “media elites can’t actually afford to do stuff that is outright offensive to every Christian”: they get away with as much as they can, given the 70% figure. As that number goes down, they will be able to get away with more, and everything indicates they will.
I invite you to visit a 70% Muslim country to see the difference.
You are in fact stating the oppression that people are concerned with right now.
“Just being forced to follow the law” is a problem if the law says that Christians can’t, for example, affirm basic doctrines of their faith without losing the protection of that law.
If it is a basic doctrine of my faith that homosexuality is sinful and that dressing as the opposite sex from your birth sex is sinful (both are true), then requiring me to violate that in order to have the protection of law IS oppression, just as much as it would be to deny a Muslim woman the protection of law unless she removes her hijab.
And saying “the law in its majestic equality makes both Christians and Muslims remove head coverings” is not changing the fact that such a law is oppressive.
The doctor’s job is deciding what is and isn’t treatment. Doctors are not the instruments of the patients’ wills. That was actually a case in Canada, anyway (where jermo is from), where referrals are not so necessary.
My brother is a doctor in Toronto and he’s devoutly Christian (unlike me). If a patient asked to be euthanized, he would have to give him a referral or be expelled from the College of Physicians. In that case, Canada would lose a specialist in internal medicine to the USA.
The Knights of Columbus thing was wrong (I wasn’t surprised to see Harris involved when I googled it — I really don’t like her), but I don’t think it stopped the judge from getting the position and even the Washington Post called it bullshit so that’s left-leaning media stepping up for Christians, not oppressing them. Also note that it’s specifically anti-Catholic bigotry, not anti-Christian bigotry. The former was a historical problem and it’s not surprising to see that there’s still some vestiges left of it today.
Whoah whoah whoah, hold up. I don’t like Harris either, but calling this out as anti-Catholic bias is going too far. First up, disclaimer: I was born and raised Catholic and went to Catholic school. I’ve had to deal with many Knights of Columbus over the course of my life.
If you go to the list of questions asked of the judge about the Knights of Columbus, they all refer directly back to specific policy positions advanced and advocated by the Knights of Columbus.
If someone coming before you is a member of a group that advocates for some extreme positions (beyond those of the Catholic Church), then asking whether their past affiliation with that group will influence their future judgement isn’t beyond the pale.
If someone coming before you is a member of a group that advocates for some extreme positions (beyond those of the Catholic Church), then asking whether their past affiliation with that group will influence their future judgement isn’t beyond the pale.
(emphasis mine)
I don’t know what you mean. In the first set of questions, the KoC donated against legalizing same-sex marriage; that is a position of the Catholic Church. A KoC magazine article said contraceptive pills can have bad side effects on reproduction, mate selection, etc.; that’s not the position of the Catholic Church or KoC because it’s an entirely empirical matter*. In the second set of questions, the KoC leader said abortion is the killing of the innocent, and the position of the Catholic Church is that abortion is the killing of the innocent, and then the same-sex marriage question comes up again.
So no, none of these positions is over and beyond the teaching of the Church, nor are they or should they be out of the ordinary for Catholics to believe. So insofar as membership in this organization is taken to be disqualifying, then membership in the Catholic Church must be taken to be disqualifying. That’s a religious test for office.
ETA: emphasis in quote
ETA: *Okay, on second thought, I’m playing a little fast and loose here. The Catholic Church, and the KoC, take plenty of empirical stances: prayer works, God exists, etc. But even supposing, rightly I’m sure, that the magazine is run with an editorial policy in mind, this can hardly be construed as “the position of the KoC.” The KoC could start campaigning against contraceptive pills for their bad side effects, but to my knowledge they haven’t. One reason for suspecting they haven’t is that if they had, we would surely have heard about that instead of this article.
@jermo
Until Nick responded specifically to my repetition, no one actually had addressed my general point – Nick had a specific sort of quibble, but I explained what I meant by it and he agreed in a general sense. And for the record, I was a Christian at the time I was in college, and I rolled my eyes at him as much as anyone else. Fact remains, Christians are allowed to come onto government-funded institutions and propagate hate speech against me, so the idea that they are being oppressed, thrown to lions, etc., remains laughable to me.
However, comparing America to the 70% muslim Kazakhstan, I see that there is little freedom of the press there. Is that what you think is necessary to protect Christians from oppression?
@Echo
“You can’t be this kind of tax-free entity and produce political ads,” is what I meant, because it was what I erroneously thought Beto had said (I misremembered, and am embarrassed for having done so). It is very different than a no face coverings law because it is not specifically designed to be anti-Christian. There is nothing in the Christian faith that requires that churches be able to participate in the political arena, and the freedom of religion that is understood in America generally expects that they be separate entities.
@Nick
I was aware of the Hawaiian as well (I said “Harris involved”), but wasn’t aware of the second incident. Still though, apparently Aftagley has a better defense for the situation than I do. (EDIT: Or you’ve already noticed, but either way I’m bowing out of this whole thing because trying to debate with you, an enjoyable partner who I have enjoyed talking to this morning, is a lot less fun when I’m going to get called an Orwellian dick by random passersby, so I hope Aftagley is fun to talk to!)

As stated above, I misremembered the Beto incident (I genuinely thought it was a 501(c) thing; those come up a lot), and can’t say I’m impressed with that kind of attitude. I’m a gay man and I don’t expect any church to be forced to marry me to my future husband. Frankly, I wouldn’t even want them to; I’d much prefer a venue that likes its practitioners.

As for the doctors thing, there are medical boards and guidelines doctors have to follow regardless of their inclinations, for the exact reason that doctors aren’t always right. They don’t have to be instruments of their patients’ wills, but a person shouldn’t be rolling the dice to see whether they’ll be able to get a treatment that’s available to the public or not. It’s their health, not the doctor’s. TBH, it feels to me like if Caesar says, “Send them to a different doctor,” that’s a pretty simple “submit to earthly authorities” deal that the Bible is pretty clear Christians are supposed to be doing.
That’s not even quite the law. Churches are allowed, as are other 501c3 organizations, to make statements on ISSUES, but they are not allowed to endorse candidates. They may also endorse and fund ballot measures. https://www.irs.gov/newsroom/charities-churches-and-politics
They are also limited in what they can spend on lobbying, but those limits are no different for churches than for any other 501(c)(3). So Beto is indeed asking for a specifically anti-Christian law, at least under certain Christian interpretations.
Given the corrected interpretation, do you agree this is a specifically anti-Christian law?
If you agree with the first, do you agree that Beto is in fact attempting to oppress Christians, whether successful or not?
(EDIT: Or you’ve already noticed, but either way I’m bowing out of this whole thing because trying to debate with you, an enjoyable partner who I have enjoyed talking to this morning, is a lot less fun when I’m going to get called an Orwellian dick by random passersby, so I hope Aftagley is fun to talk to!)
Sorry to hear that, it’s been a good conversation. @Aftagley is a decent guy so I think it will continue to be a good one.
I hope so, but I’m having trouble writing a defense of my previous statement. Usually when that happens, it means the position I’m trying to defend is weak, so I’m not sure how good a discussion partner I’ll be here. (Also I’m pretty slammed with work today).
First off – you’re correct. The KoC doesn’t stray from official catholic teachings. I’d argue they cherry-pick which aspects of catholic teachings they care about supporting politically, but their positions are defensibly catholic. In this aspect, they aren’t “extreme.” Re-reading my point, it looks like I’m alleging this, so I’ll retract that claim.
That being said, compared to polling of the catholic community in America, they are pretty extreme. KoC continues to denounce gay marriage, while a majority of US Catholics support it. Same with abortion – a majority of US Catholics are in favor while KoC continues to take very extreme measures against abortion (I don’t care what your beliefs are, intentionally deceptive Crisis Pregnancy Centers are ethically suspect and qualify as extreme). This keeps happening – average catholic opinion, at least in America, corresponds far closer to national averages on topics than it does to official church teachings.
So, that’s what I meant – the KoC is a conservative subgroup picked from a pluralistic larger community of Catholics. In the same way that disliking an exclusively left-handed group of avowed Marxists wouldn’t imply a larger distaste for the sinister folk among us, I don’t think that being leery of a Knight of Columbus implies any larger anti-Catholic bias.
On second thought, no, my argument above is bad. I can’t simultaneously claim that the Catholic church as a community is a pluralistic group that isn’t always bound by doctrine AND that the Knights of Columbus are all totalitarian and only follow church doctrine.
Hmm, I’m not sure where to go from here. I still don’t think that her asking those questions indicates an anti-catholic bias, nor do I think they were unfair to ask, but I’m having trouble figuring out why.
Beto seems to argue that churches should be taxed when they engage in anti-gay marriage politics, but not when engaging in pro-gay marriage politics. Then the defining characteristic that he wants to tax is not them engaging in politics, but the contents of their politics. Since that political stance is a part of certain religions, that is political as well as religious persecution.
This keeps happening – average catholic opinion, at least in America, corresponds far closer to national averages on topics than it does to official church teachings.
This is such a well-known fact that Americanism was declared a heresy in 1899.
(However, what was Americanism in 1895 is basically official Catholic doctrine since Vatican II. So I don’t know where this leaves us.)
On second thought, no, my argument above is bad. I can’t simultaneously claim that the Catholic church as a community is a pluralistic group that isn’t always bound by doctrine AND that the Knights of Columbus are all totalitarian and only follow church doctrine.
Hmm, I’m not sure where to go from here. I still don’t think that her asking those questions indicates an anti-Catholic bias, nor do I think they were unfair to ask, but I’m having trouble figuring out why.
I think some American thinking on this point tends to be torn between a sort of radical individualism and reality, and we often don’t have very coherent ways of thinking about how various forms of group membership should intersect.
From a hyper-individualistic point of view, there’s nothing particularly special about religious belief over any other form of belief. If you’re supposed to wear a certain hat for a job, either everyone has to wear it (doesn’t matter if you’re Sikh; no one gets extra privileges by virtue of some other group membership) or no one has to wear it (we can all wear whatever hat or other head covering we like).
But there’s a desire for religious membership, in some sense, to be able to allow otherwise disallowed behaviors or to grant protection for certain opinions in a way that membership in other groups typically doesn’t, at least not explicitly, because trying to stamp out the religious beliefs of various groups in the past has often turned out so incredibly badly.
If I were starting from scratch in utopia, I’d think about scrapping any and all religious carve-outs, but only if I could widen the default level of liberty people get for most beliefs that aren’t in some way aggressive (as in, inciting physical violence or involving true threats or something).
But in the real world, religion gets some special carveouts and respect by virtue of its age and number of believers, and I’d probably prefer that to stick around on balance even though I think it’s not really defensible as an abstract principle. There aren’t many other types of organizations large enough and significant enough to people to counterbalance the power of government or to provide a source of comfort and support when government fails.
Punishing the expression of the wrong ideas via the tax code is pretty obviously going to violate the first amendment. Beto isn’t stupid, so he knows this and is saying stuff that sounds good to his base but that he knows can never be done.
If promises that violate constitutional and human rights are a valid (Democratic) election strategy, then a lot of the (Democratic) criticisms of Trump seem rather hypocritical.
So, that’s what I meant – the KoC is a conservative subgroup picked from a pluralistic larger community of Catholics. In the same way that disliking an exclusively left-handed group of avowed Marxists wouldn’t imply a larger distaste for the sinister folk among us, I don’t think that being leery of a Knight of Columbus implies any larger anti-Catholic bias.
You’ve already given up the argument for different reasons, but I want to push back on this, anyway. In England for many centuries Catholics were forbidden from holding public office. Of course, it wasn’t a matter of blood—some cradle Catholics pursued public office by leaving the Church and becoming Anglican.
Your argument seems to be saying that it’s okay to be suspicious of the KoC for nothing other than being orthodox Catholic in its political pursuits*. In other words, if the KoC or Buescher himself simply disavowed Catholic teaching, everything would be fine. That’s still a religious test for office—it’s just one requiring heresy rather than apostasy.
ETA: *Granted, your fake crisis pregnancy centers example speaks against this somewhat.
Your argument seems to be saying that it’s okay to be suspicious of the KoC for nothing other than being orthodox Catholic in its political pursuits*.
Not quite. My opinion (read: bias) on the KoC is that they are a politically motivated subgroup that just happens to be made up entirely of Catholics. I think that the tapestry of Catholic beliefs doesn’t track naturally onto either political party, and that the KoC cherry-picks which Catholic causes to mobilize around by which ones are both Catholic and Red Tribe (i.e., firmly pro-life, but only paying lip service to ending the death penalty).
This belief is heavily influenced by my history with the KoC and knowledge of some of the membership (somewhere between 60-100 people). This article is a more eloquent explanation of my trepidation around the Knights and their current leadership.
Regarding whether “Christians are now oppressed in the U.S.A.”, my guess is that it depends: relative to what? Compared to Sudan?
Mostly not; some of the new ‘oppression’ comes from feeling a loss of dominance. See Pew Research in 2017 on How Americans Feel About Different Religious Groups (including atheists), in which those polled were asked to rate how warmly or coldly they felt about other Americans in different sects (and atheists), from 0 degrees (very cold) to 100 degrees (very warm). Compared to a similar study in 2014, most groups received warmer ratings.
The ratings were:
Jews 67°
Catholics 66°
‘Mainline’ Protestants 65° (Episcopalians, Methodists, et cetera)
‘Evangelical’ Protestants 61° (Baptists, Pentecostals, et cetera)
Buddhists 60°
Hindus 58°
Mormons 54°
Atheists 50°
Muslims 48°
So most Americans are “warm” towards Christians – but are there places and subcultures in the U.S.A. that are “cold” towards Christians?
Yes.
From The New York Times: A Confession of Liberal Intolerance By Nicholas Kristof: “…consider George Yancey, a sociologist who is black and evangelical.
“Outside of academia I faced more problems as a black,” he told me. “But inside academia I face more problems as a Christian, and it is not even close.”…
They are not attacked for their faith, nor forced to hide it, again excepting situations when anyone would be expected to leave their faith at the door.
Where are these situations? Please give me a list of cases where my religion is not welcome so I know what to avoid.
I mean like, if you’re a therapist, you should be helping your patients without trying to convert them. If you’re a teacher, you should be teaching your students without trying to convert them. That kind of thing. I’m sure you’d agree that a depressed Christian shouldn’t be getting told by their atheist therapist that all of their problems are caused by “an irrational belief in sky fairy old man”?
I don’t think this is a good example. There are lots of therapists that specifically advertise themselves as “Christian therapists” who bring their faith directly into the room to help their patients.
As long as they are honest, I don’t see a problem with a Muslim/Christian/Jewish/atheist therapist using their beliefs to help their patients, and it’s societally encouraged.
@TakatoGuil
I definitely agree with all that, but there are the rather hairier cases of whether Christians can be teachers without being made to teach gender theory in school, or whether Christians can be doctors or run hospitals without being made to do things they believe are harmful, and other such cases. My job as a Christian isn’t to convert people in the workplace, but I’d like to be able to help people, consistent with the corporal works of mercy. Faith without works is dead, etc.
@EchoChaos
There are also specifically faith-based teachers, but the generic example of either does illustrate the point, while the exceptions only further demonstrate that Christians are about as oppressed as people who like dogs.
@Nick
There are definitely hairy cases, but that’s more a case of the world being a complicated place that makes it impossible to make everyone completely happy than it is of Christians being oppressed at this time.
As a matter of preference, I would prefer all teachers to state outright their biases in that way rather than hide them.
If my therapist is an atheist who genuinely believes that irrational belief is causing my problems, I don’t want him to lie to me. I don’t see it being a better world when they are.
And I will note that your point to Nick is exactly the complaint that most Christians have when they say they’re oppressed. Whenever the discussion involves two people who can’t both be happy because of a contradiction, the left – somewhere between almost always and always – comes down on the side of the non-Christian.
So let me try to give a generic secularist response to this, bearing in mind that the OP may not agree with it and I don’t always agree with every implication of it myself.
The relevant principles are:
(a) If you are making decisions about the operation of a public accommodation, you should have to make them for public reasons, in the philosophical sense of public reason.
(b) Religious beliefs do not count as public reasons. Neither do racial prejudices and some other kinds of prejudices. Note this is not the same as saying religious beliefs are morally equivalent to those prejudices, only that they equally must be excluded from public reasons.
So a good rule of thumb, on this view, is that if you are performing an institutional function where you would justly be prohibited from discriminating against black people, you are also justly prohibited from making decisions about that function according to your religious beliefs. This of course leaves a lot of unanswered questions, notably what should count as a public accommodation, but it sheds good light on most of the specific situations raised in this thread.
The pragmatic and historical justifications are similar too: the key claims include
(1) that decision-makers in public accommodations, even if privately operated, exercise broad-scale coercive power that must be checked to preserve the effective autonomy of others; and
(2) that there is no stable equilibrium where private institutions are free to discriminate or not, or to impose their religion or not, because historically private racist institutions, and likewise private theocratic institutions, had no qualms about using both state and private violence to get their way, so the only way to prevent that kind of intertwined oppression from taking hold again is to banish it entirely from the public square.
(2) in particular deserves emphasis because I think most religious people who feel persecuted by secularist restrictions today underestimate the degree to which those restrictions are motivated by horror at the theocratic restrictions of the past. Christians are, on this view, not seen as powerless in any secure or long-term sense, but as recently defeated tyrants who have to be very carefully policed lest their tyranny spring up again. The recent rise of “integralism” will only lend more credence to this view.
@salvorhardin says: “…I think most religious people who feel persecuted by secularist restrictions today underestimate the degree to which those restrictions are motivated by horror at the theocratic restrictions of the past. Christians are, on this view, not seen as powerless in any secure or long-term sense, but as recently defeated tyrants who have to be very carefully policed lest their tyranny spring up again…”
That’s an interesting insight, as it’s puzzled me why conservative Christians in cities like (for example) San Francisco (where I work) aren’t more often treated with the blanket liberal tolerance that other faiths receive (though there does seem to be more tolerance for heterodox views when the believer is also plausibly an ethnic minority). But when I think about it, most of those in opposition to fundamentalists, etc. are migrants from areas where religious conservatism was dominant, usually including their family members, while those who grew up inside “blue bubbles” seldom care as much, if at all.
From my vantage point, that the Baptist church doesn’t have a rainbow flag flying like the Methodist church up the street does just doesn’t seem like something to fight about; to each their own. (That Catholics do have an inside battle seems inevitable, though, given their size and breadth of worshippers, and being the ‘universal’ church.)
Like much of ‘the culture war’, it seems to me that the way to go is just declaring a truce and accepting that a monoculture isn’t feasible anyway for a nation this large and populous (and that is the historic solution: there were some State churches in the early Republic, but a Federal church was forbidden with the Bill of Rights).
Christians are, on this view, not seen as powerless in any secure or long-term sense, but as recently defeated tyrants who have to be very carefully policed lest their tyranny spring up again. The recent rise of “integralism” will only lend more credence to this view.
Yes, I agree. Liberals are treating Christians, the majority of the country, as a defeated foe that must be policed.
Not a surprise that traditionalist Christians notice it: but how many reflect upon the long, cruel centuries of theocracy that might cause secularists to so passionately regard that policing as necessary?
And the “traditionalist” qualifier is key here, as modernist Christians of the “More Light”/”Open and Affirming”/etc type are not the foe and not treated as such. This matters because while certainly Christians broadly speaking remain the majority, as you say, I would dispute that traditionalists are. The polls on social issues would look very different if that were true.
It doesn’t seem very coherent to argue that the religious must be severely restricted in their freedom because the religious used to severely restrict other people’s freedom. Also, when the religious restrictions seemed to be heavily based on social policing by a large religious majority, it is not obvious at all that it is necessary to police a religious minority in a secularizing society.
Also, it is extremely and literally uncharitable to equate religion with oppression and ignore the good things they did and do.
It doesn’t seem very coherent to argue that the religious must be severely restricted in their freedom because the religious used to severely restrict other people’s freedom.
It’s perfectly coherent. If someone is going to get oppressed, then it’s preferable that it’s Them and not You. That reasoning doesn’t give you the moral high ground, but having the moral high ground isn’t much comfort when you’re being oppressed.
Also, when the religious restrictions seemed to be heavily based on social policing by a large religious majority, it is not obvious at all that it is necessary to police a religious minority in a secularizing society.
But Christians are not a minority in America. That’s a running complaint in this thread, that Christians should be treated better because they’re a majority. And if society is secularising, it’s doing it so slowly and listlessly that it’s very easy to imagine the direction turning at any moment.
This is a defensible position. It’s also NOT the one liberals generally claim, which is that Christians aren’t oppressed and that their complaints that they are should be dismissed as nonsense.
I agree with the statement that we’re talking about traditional Christians, but saying “just change your beliefs and we won’t oppress you” hardly makes it better. Nor does the fact that such Christians are a minority.
I do think you have described the actual motivation well, although I think your view on how bad Christian oppression was is pretty historically inaccurate.
(2) in particular deserves emphasis because I think most religious people who feel persecuted by secularist restrictions today underestimate the degree to which those restrictions are motivated by horror at the theocratic restrictions of the past. Christians are, on this view, not seen as powerless in any secure or long-term sense, but as recently defeated tyrants who have to be very carefully policed lest their tyranny spring up again. The recent rise of “integralism” will only lend more credence to this view.
The thing is that, for a variety of reasons, that fear is way off base today. I feel like I need to do an explainer on integralism. I’m not really the person for it, but I’m not sure there’s a better person for it on SSC, either.
@Nick @TakatoGuil
Re: doctors, I think the most equitable norm would be this: A Christian doctor who believes that abortion is murder should, if advising a patient with an unwanted pregnancy, clearly state that they believe abortion is wrong, and offer to refer the patient to a different doctor if the patient’s beliefs conflict with that. Ditto re: HRT for gender-dysphoric patients or whatever else you’re alluding to.
I feel like this avoids forcing doctors to advocate procedures they believe to be harmful, while not closing off options for the patient if the patient believes those procedures to be helpful.
@thevoiceofthevoid, I see where you’re coming from on the basis of social policy, but – as a pro-life Christian myself, though not a doctor – I would not be comfortable with that. You’re essentially telling people like me that, while we don’t need to personally perform (what we consider to be) murders, we need to make personal referrals to hitmen.
It’d be better to let health insurers, or even the government, maintain such a registry themselves.
@Evan Þ
You make a fair point. I’ll weaken my recommendation to the pro-life doctor saying, “I believe abortion is fundamentally wrong, if this gravely conflicts with your beliefs then you should find another doctor,” rather than actively making a referral.
You’re essentially telling people like me that, while we don’t need to personally perform (what we consider to be) murders, we need to make personal referrals to hitmen.
This is a very real issue for my brother who is a doctor in Toronto, and who could lose his license if he refused to refer someone who was seeking to be euthanized.
On doctors specifically, one complicating factor is that the supply of medical practitioners is artificially restricted. It’s less defensible to refuse services on the ground that “they can go somewhere else” if you benefit from the use of state power to narrow the range of other places they can go.
On intolerance for traditionalist religion generally, I should clarify that I personally believe this has gone too far in a way that undermines liberal principles of tolerance; as obnoxious as Jack Phillips, Hobby Lobby, etc are they should be free to run their businesses according to their values, just as secularists should be free to boycott them according to ours. The point I am making is that if you have historically not practiced tolerance toward others, you probably won’t get a positive reception when you demand tolerance for yourself.
The Alliance Defending Freedom provides a good example here. A lot of the cases they take really are defending private religious people and organizations who just want to live out their beliefs in peace in their private lives. But when major figures in the organization work to defend the criminalization of homosexuality– mostly in other countries these days since they realize they’ve well and truly lost that fight here– their claim that they’re just trying to defend religious freedom rings hollow. And given the level of state and private violence that gay people were historically subjected to in the US before the defeat of traditionalism, and the level of violence they are still subjected to in those other countries, it’s understandable that an organization that elevates supporters of that violence would be condemned as a hate group, even though that overbroadly condemns those of its members who genuinely care about religious freedom and don’t want the state to impose their own religious beliefs on others.
Yes, but show me a person who doesn’t get sniped at for something? I’m not keen on situations where some people or groups of people get sniped at a lot more than others, without being personally at fault – but I don’t see Christians as being in that situation in the US.
Put another way, I own a political tee-shirt with a slogan of “no special rights for Christians”. The set of places where I could wear it without problems is much more limited than e.g. the set of places where Christians can and do prominently and visibly self-identify (via jewelry, bumper stickers, etc.) And I live in California, in particular the SF Bay area.
And note that the sentiment I’m (not) expressing is not “no rights for Christians” but in effect “no rights for Christians that aren’t also available to atheists, Muslims, Jews, Bahai, Hindus, and members of random New Age sects”.
Note also that the problems I’d experience would (just) be lots of offended Christians, and a few people who dislike politics being brought into the workplace etc. That’s enough to stop me from wearing it to anything much except a political rally protesting anti-non-Christian political speech or action, because I don’t especially want to snipe at random people in my environment.
Also, finally, I bought the shirt while living in Colorado, during the decades when that was home base for some loud and well-known Christian conservative movements. Having those people around for too many years makes it very easy for me to respond to “Christianity” emotionally as being all about using the power of the state to mind your neighbour’s business, in between committing nasty public acts of cruelty (e.g. picketing someone’s funeral to announce that the death was God’s punishment for sin). I know not all Christians are like this – intellectually at least – but my gut insists those people are the majority, or the ones with real power.
Am I sniping by bringing this up? Not any more than other posters on this blog (not this thread) who’ve opined that no one can be moral without religion. I.e. I’m not choosing the option of maximum kindness, but neither are they. And neither of us is sniping per se.
I suspect it’ll be a cold day in Hell before I get over my visceral reaction to in-your-face Christians, and anyone who mixes Christianity with politics. But I am capable of not letting it affect my behaviour to random small-C christians, at least 99.9% of the time.
OTOH, if you turn up at my door to proselytize, better hope I’m in a good mood, and manage a somewhat frosty “I’m not interested in discussing religion, thank you” while firmly closing my door.
I own a political tee-shirt with a slogan of “no special rights for Christians”. The set of places where I could wear it without problems is much more limited than e.g. the set of places where Christians can and do prominently and visibly self-identify (via jewelry, bumper stickers, etc.) And I live in California, in particular the SF Bay area.
Suppose I owned a political tee-shirt with a slogan of “no special rights for blacks”, or “no special rights for LGBTQ”. What do you think is the set of places I could wear it without problems?
And what do you consider to be “the set of places”? Is it strictly geographical – the total number of acres where I could wear such a shirt without problems – or might it also be in the virtual realm of online communication? (Suppose it’s not a shirt, but rather my email signature, or the tagline under my username or portrait in any typical online forum.) Or the virtual realm of online news? Popular film? Academic discourse?
What was that Yglesias quote again? “Right is dominant in policy and cares about culture, left is dominant in culture and cares about policy”?
I get the feeling your comment doesn’t account for how well you’ve got culture locked up.
Note also that the problems I’d experience would (just) be lots of offended Christians, and a few people who dislike politics being brought into the workplace etc.
I think this doesn’t help your point. If you dare wear your tee, you end up offending lots of people you rarely interact with anyway. If someone dares go around with the other tagline, they risk the fate of Justine Sacco.
(Curiously, for a lot of righties, this is just peachy. They hang out in rural communities and small online enclaves. They’ll keep their RL friends just like their lefty counterparts do. But some of them will still look at Hollywood, mainstream media, and the government just as longingly as those lefties look at Wall Street, big energy, and… the government.)
[Colorado, sniping]
On the gripping hand, yeah. A lot of this is just that SMBC cartoon redux.
Maybe we should just say “SMBC #2939!” as shorthand for “look, we’re just bitter over the other side’s snipers”, so we can move the discussion along.
What was that Yglesias quote again? “Right is dominant in policy and cares about culture, left is dominant in culture and cares about policy”?
Except the right isn’t dominant in policy. Is government spending going down? Is the regulatory burden being reduced? Is power being devolved to the states? Other than gun control, policy is either not moving or moving left.
Except the right isn’t dominant in policy. Is government spending going down? Is the regulatory burden being reduced? Is power being devolved to the states? Other than gun control, policy is either not moving or moving left.
The right controls the Senate, controls the White House (after a fashion), and got five justices on SCOTUS, and might score a sixth if something happens to RBG. And if you define right = conservative = slow change as opposed to swift, then every time policy stops or moves only slowly left is a victory for the right. It ain’t all roses on the left.
Except the right isn’t dominant in policy. Is government spending going down? Is the regulatory burden being reduced? Is power being devolved to the states? Other than gun control, policy is either not moving or moving left.
I think this is most easily explained by the Bryan Caplan theory of the left/right. “The left hates markets; the right hates the left.”
On average, the right has a relatively weak belief in things like restraining government or reducing regulatory burden. After all, when they’re in power reducing those things would reduce their own power. And even “right-wing” voters are more motivated to punish the villain corporation or business owner of the week than they are to unleash more market competition. Or they’d really like to stick it to the left.
Individual business owners themselves are not necessarily concerned with being pro-free-market either, since some regulations help them.
@Paul Brinkley You certainly have a point there, but it’s one from a different sub-thread. This one’s about gratuitously picking on people, not about political favoritism, and some animals being more equal than others. You would, currently, and in the US specifically, have “no special rights for <specifically protected group>” interpreted as a desire to specifically persecute that group, even if your statement were still more explicitly a complaint about Affirmative Action, and would be responded to accordingly.
I’m not going to argue about the desirability or implications of that reaction this deep in a thread, where replies can’t be threaded. If you do want to discuss that, on its own or in contrast to the treatment of Christians qua Christians, start a new thread in any CW post, and I’ll probably rise to the bait and explore the question.
What Nancy appears to be claiming, rephrased in blue tribe argot, would be microaggressions etc. directed at vocal or obvious Christians.
Your (implicit) claim is either about political correctness (taboo speech) or about favoritism directed against you. (I’ll know which only if you start that thread.) Different topic, and more serious, at least on the latter interpretation.
“What Nancy appears to be claiming, rephrased in blue tribe argot, would be microaggressions etc. directed at vocal or obvious Christians.”
Sort of. What I was thinking about was generic snark aimed at Christians in general, and Christians who want to keep the peace and/or don’t want to be involved in sticky situations need to not say anything about it.
What I was thinking about was generic snark aimed at Christians in general, and Christians who want to keep the peace and/or don’t want to be involved in sticky situations need to not say anything about it.
Ye olde “let’s all agree that those people are outgroup”, addressed to a group containing some of “those people”, with or without the speaker knowing this, or knowing whom.
That used to be the normal experience of gays and others of deviant sexuality, and it sucked then and sucks for Christians now.
Where I hang out, it’s less common than the othering of gays earlier in my life/career, but I certainly believe you.
The frequency would have to get pretty high for me to get overly concerned, with Christianity not being a valid (legal) cause for someone to lose their job, and essentially never attracting violence (“gay bashing”) – making it much much easier/safer for a Christian to be “out” than it was for a gay person. (Of course this is in the US or Canada.)
I currently experience frequent “microaggressions” (if you want to call them that), in the form of people basically insisting that extroversion is required for happiness, particularly in old age. It’s annoying, and can be depressing – what if they are right, after all – but I’m raising it here only as an example of the random stuff that everyone gets hit by. An awful lot of people see the world from their own viewpoint, and presume/insist in spite of evidence that everyone else should too.
But I may be being systematically unfair to Christians. I’ve witnessed or been the target of too much bad stuff explicitly motivated by Christianity – even if other Christians would, and sometimes did, call the perpetrators heretical. So I probably can’t evaluate evidence dispassionately. (And this even though 2 of my grandparents were perfectly good/nice Christians, one of them being fairly devout.)
An ideology lives and dies based on how powerful it is perceived and how serious everyone seem to take it. Some ideologies anyway.
If you blaspheme, you show that you don’t take religion sufficiently seriously; you are the enemy. Same with some secular ideologies. If the Church tells you to be offended when someone says “damn it” or listens to heavy metal, you’d better be. When someone authoritative enough tells you to be offended at blackface, the OK symbol, or frogs, you’d better be as well.
It seems pretty arbitrary because those things alone are powerless, but they can become a symbol of defiance. And defiance leads to satanism. Or racism. Or something.
I agree with the points in the last paragraph. It’s a complex judgment based on the intent of both parties and culture.
If I think you are offended as a weapon to try and win some policy battle, I’m inclined to judge against you. Likewise if you seem to be cultivating particular sensitivity.
On the other hand, if there isn’t really a larger point behind the action, it deserves more of a presumption of innocence. And then, how reasonable is the action in the dominant culture? Wearing a dress from another culture, say, is pretty different than burning a book. It’s usually an honor to have something named after you, so I don’t think having a team named “the Braves” should be offensive – but “Redskins” probably is a more legitimate grievance, because we don’t usually refer to people by physical characteristics.
Eh, you know, I think you’re right. You can get away with saying “black” too.
And it’s not lumping distinct ethnicities together that is the problem, because white–and black–do that too.
I guess it’s more context, though, since I’m sure a baseball team named “the blacks” would go over poorly.
Maybe it has nothing to do with the name, but the exaggerated iconography? But we’ve still got “fightin’ Irish” don’t we?
Yeah, the rules are hard to articulate.
Best to err on the side of not taking or giving offense.
Best to err on the side of not taking or giving offense.
If that were a general rule, it would be reasonable.
But I doubt that people will stop referring to the War of Northern Aggression as a just war in my presence. 😉
In all seriousness, there are definitely social standards about what is a “reasonable” offense to take and what isn’t. While we should definitely be charitable to people, especially in meatspace, having the ability to control those boundaries is powerful.
The Wikipedia page is unsure of where the moniker “Fighting Irish” came from, but it sounds as though it may have been started by Irish people who were affiliated with Notre Dame referring to themselves. The rules are hard to articulate, but I think one thing that’s generally agreed on is that people have a lot more leeway in what they say about themselves and their in-group. “The Redskins” would probably get a lot less flak if it was actually a team of Native Americans referring to themselves.
That said, I didn’t necessarily take away that David Webb was offended at being called white, but rather at the implication that someone with his life history obviously had to be white (and even then “offended” may be too strong a word).
Looking at the bigger picture, he has a bigger point than might otherwise be apparent. I mean: Obama was President for two terms. Assuming that black people can’t achieve pretty much anything they put their minds to after that seems… I don’t know… a bit racist, actually.
The argument is that offence (as used here) is about a slight on somebody’s (or some group’s) honour. That makes being offended on a group’s behalf make slightly more sense – you’re trying to prevent a certain group from being disrespected rather than sharing some possible emotional response.
Treating certain groups differently also makes more sense under this interpretation i.e. people [leftists] are trying to prevent groups they think are disadvantaged from being disrespected. Christians aren’t disadvantaged now or in recent history (according to common/standard thinking on the left; I’m not interested in arguing whether this is actually the case) and so this disrespect is less important.
This is a hard one. It can be a very good thing to amplify the voices of people who are not being heard, or (sometimes) use your own relative privilege to point out harm being done to people who don’t feel safe enough to speak out. But it can be a pretty terrible thing when self-appointed allies decide what other people need/want, and insist on giving it to them.
And emotional harms are much harder to judge than more tangible issues. One person’s terrible interaction, leaving them depressed and suicidal, is barely noticed by another person of similar objective circumstances. In general, it’s not a great idea to do things especially likely to cause emotional harms, but at some point demanding that others walk on eggshells starts doing more harm than the behaviours you are trying to prevent.
The internet doesn’t help, bringing together people who e.g. use a particular Anglo-Saxon (sic) word every second sentence, with people who regard the use of that word even once as putting the speaker permanently beyond the pale.
AFAICT, almost all Twitter outrage storms these days cause a lot more harm than any benefits they may provide. OTOH, I’m happy with e.g. a manager interrupting people who talk over other people, saying things like “I’d like to hear what < person you interrupted > had to say”, whether or not this is the more common pattern of a male talking over a female. I’m quite OK with people not laughing at – and even drawing negative attention to – jokes based on treating some specific group as stupid (etc.), particularly when there are likely to be people of the target group present.
Bottom line – we could all use a lot more tolerance, and a lot less offence, and the more power someone has, the more true that is.
OTOH, if you want to build yourself an echo chamber, taking offence at anything and everything is one way to get a group consisting only of clones. And with the internet being what it is, you have a much better chance of finding people who agree with you on all your pet peeves – or who are willing to keep silent where they disagree – than you would in person, where you might find yourself preaching to a non-existent choir, unless you can bribe people to be your supporters.
The straightforward steelman, I think, goes like this:
1. Everyone agrees that respect for people is important.
2. Respect is not, however, universally due.
3. Ergo, rules for when respect is and is not due are important.
4. Since those rules are important, they require enforcement (assuming they are fair, etc.)
5. That enforcement must be universal, given the above.
6. Ergo, if you see someone hurt by a rule violation, you must try to defend them. QED.
Because of #6, people will say offense to one of us is offense to all of us. More accurately, I think one is not offended on behalf of others so much as they’re offended by someone breaking the rules for respect.
However, nowadays, the weak point is #4 – many people dispute whether the rules are fair. This is important, since if the rules aren’t fair, there will exist people with insufficient incentive to follow them. Anyone focused on #6 is going to be disappointed if they ignore #4.
And so we have this rule where you can punch up, which people at the top have no incentive to follow.
I can carry this further. There are meta-rules for figuring out fair rules, and one of them is “don’t abuse the rules”. Rules have a spirit, and if you break that spirit while following the letter, you’re in arguably even bigger trouble than if you’d just broken the letter, because you’re acting willfully.
And so we have people pattern-matching ordinary acts to acts previously ruled offensive, uncovering vast amusement parks of offense. Some of that is willful, some lazy. But they remember to be “offended on behalf” and keep going.
And so, incidentally, we also have people who will step right up to the brink of offending, say, Christians, without being outright offensive. And we have people who will think “ohh, they’re devout” and just stop inviting them to dinner, or offering to watch their kids… muttering that much more when they complain about something… passing over them for promotion because they’re “not a good fit”…
Which is not to say everyone ought to be required to invite devout Christians over for dinner. Rather, it’s that everyone ought to not mutter evasively about disrespect, or offense – or their fashionable synonym, “microaggression” – when it happens to anyone else. (But it’s probably worth noting there’s a rift there, and asking how badly everyone really wants that rift to close.)
Meanwhile, the respect system is still there, as a reminder to give fellow humans the kindness that encourages reciprocation and good humor. To violate its norms is to invite its collapse.
Politeness norms are great, but they should be universal, right? If you know something offends me, even if you think it’s a silly thing to be offended by, it’s polite to avoid it in my presence. What we’re talking about is a set of norms in which offending some people is very bad, and offending others is a positive good.
When someone wears blackface, people will respond with offense even if they’re white because they’ve learned that it is offensive. When a gay couple kisses in public, the same people will be offended at the old man who is visibly upset by this (by the standards he grew up with) highly offensive display. There’s something going on here, but it’s not politeness.
Politeness norms are great, but they should be universal, right? If you know something offends me, even if you think it’s a silly thing to be offended by, it’s polite to avoid it in my presence. What we’re talking about is a set of norms in which offending some people is very bad, and offending others is a positive good.
Agreed. But in turn, we’re assuming that your offense is in good faith. We both recognize that it’s possible to feign offense and control a great deal of behavior that way, and so we recognize the need (in a Schelling sense) to avoid even the appearance of doing so. Offense has to be based on trust, which is fragile, and therefore requires care.
This may mean sacrificing some offenses. If I think your tie looks tacky or your style of naming variables in your code offends my sense of elegance, it’s better for me to just suck it up than to try to make you change, risk mistrust, and then deal with you saying you’re offended by my combover, say. Or, it may mean that I have to make sure it comes across as an effortful offense, and be prepared to explain the nature of the offense in convincing detail. Bonus points if it sounds like natural conversation; the presumption is that we’re trying to be friendly, not curt. (Sorry to any Curts out there.)
So:
When someone wears blackface, people will respond with offense even if they’re white because they’ve learned that it is offensive.
This is learned, and can only be argued by authority, so I think they need to suck it up. (If, OTOH, they’d learned the history of minstrel shows well enough to grok the offense, then they can put effort in and it works.)
When a gay couple kisses in public, the same people will be offended at the old man who is visibly upset by this (by the standards he grew up with) highly offensive display.
Some people are bothered by PDAs; if they are, they can probably be convincing about it, but they can probably just turn the other way, too, unless it’s noisy. So, case by case. Offense at the old man’s offense might be offense at the old man being careless with trust, again, depending on the case.
To some extent, negotiating offense is just going to feel weird, because it’ll ground out as a rational discussion about irrational gut feelings. OTOH, if we SSCerati succeed in our goal to convert the world into ratbots, we will also solve the problem of offense, and can turn to more productive tasks like paperclip making.
Meanwhile, the respect system is still there, as a reminder to give fellow humans the kindness that encourages reciprocation and good humor. To violate its norms is to invite its collapse.
Given that it is not in fact reciprocal, let it collapse.
There is no such thing as an offensive word or deed in itself. “Offended” describes how someone behaves who chooses to act as if they were offended. A person will act offended whenever they think it is in their best interest to do so. It is their choice. We should generally ignore them; it is not any of our business. If we choose to act offended on their behalf, it is to promote ourselves. These are all good reasons to have no friends. It is all way too much trouble. I do not get offended, because I never choose to, and I cannot imagine acting offended on behalf of someone else. Surely I can find something to do besides talk. What a waste of time!
I think your examples can be (mostly) divided into three primary categories:
1. A direct attack on the offendee’s group. The offender intentionally insults a group and/or desecrates their symbols. This includes a, b, d, and i; and e or f if the offender intended to mock or denigrate.
2. An unintentionally poorly-taken reference to the offendee’s group. The offender says, does, or names something in reference to a particular group that they think is fine, but at least some members of the group find it offensive. This includes e, f, and g if the terms or aspects of culture were intended to be used neutrally or respectfully.
3. The “offenders” are simply trying to live their life and make no particular reference to the “offendees”. This includes examples c and h–saying “I believe LGBT folks have the right to marry and to be addressed with the pronouns of their choice!” doesn’t mean “I hate conservative Christians!”, though it does necessarily imply “I fundamentally disagree with conservative Christians on an important issue.”
Obviously, I think that the examples in category 1 warrant the most direct and/or secondhand offense, while category 3 warrants virtually none at all.
As a bystander, I think the proper response to category 2 is usually “whoa dude, I know you didn’t mean it like that, but some people find that term/costume/name really offensive.” Hopefully the slight was truly just a misunderstanding and can be resolved with minimal drama. However, “screw that, I think I’m being perfectly respectful and it’s only a tiny nonrepresentative group who are acting offended” is sometimes a valid response.
Category 1 is in some ways thornier, since the intent clearly is to offend. Here, if you get involved at all you’re going to have to explicitly choose a side, e.g.: Do you care more about the reverent treatment of religious symbols, or about comedians’ ability to criticize and mock religion? Once you’ve chosen, chastise and support accordingly.
My five-year-old (kindergarten) is showing a lot of the classic signs of ADHD, and several of the education folks at school have expressed concern. I’m being told to wait until the end of kindergarten for evaluation.
I don’t really see the point to this. If therapy or medication can benefit, why wait? Do they expect him to grow out of it? What’s the concern?
My biggest concern is that, due to some of his struggles, either he or the school will form the notion that he is “bad student” and that this will hamper his further development.
My biggest concern is that, due to some of his struggles, either he or the school will form the notion that he is “bad student” and that this will hamper his further development.
Unless your school is very small I would not worry about this at the kindergarten level. I probably wouldn’t worry too much about it even if it is a small school. People love a redemption story, so if it turns out the kid does have ADHD and in a year or two he gets proper treatment/medication and he turns into an excellent, attentive student that might even be better than if he had been a good student all along.
As far as “why wait”, 5 years old seems awfully early to start treatment. Doesn’t basically every 5 year old show some signs of ADHD?
Disclaimer about my opinions: Obviously I am not a doctor. I did receive an ADD diagnosis sometime around the 7th grade, took Concerta for about a year, hated it, and never took any medication again. Two degrees (admittedly both undergrad, but whatever) later, I think I’m doing alright. I’m not opposed to medication for ADHD in principle, but I believe it is massively overdiagnosed, and that even for legitimate diagnoses the medication is overprescribed.
I started on ADD meds during the summer before 1st grade. I still remember the first day I took them. I had been struggling to read before, but about 15 minutes after the first pill, I sat down and read through a stack of books. I’ve been on some form of medication (first ritalin/concerta, now modafinil) ever since. No notable long-term effects from starting that early.
Not sure what to say over starting now vs starting later. I’d suspect that you won’t get pigeonholed too much, although it depends on exactly how it manifests. Mine was very much of the “staring into space” variety back then, and I was smart enough to get my work done, so I didn’t have trouble in kindergarten. It later became more outwardly weird, but I suspect that was rebound/withdrawal from the ritalin based on how I am on modafinil.
My brother has ADHD. A specialized tutor helped him a lot with reading and math skills, but that started in (I believe) second or third grade; prior to that there isn’t really a lot of actual learning done in school, as opposed to socialization and daycare. Similarly, many ADHD medications do weird things to brain or metabolic function; best to put that off as long as possible in a growing child. I forget which medication he ended up taking, but it seriously suppressed his appetite – he ended up very underweight and had to follow a diet plan.
Also, mental health care generally, and particularly childhood ADHD, is really, really, really, unimaginably bad at both the specificity and sensitivity parts of accurate diagnosis; Scott has a post on this somewhere, I believe.
For one, it’s because of the way kids develop. Every kid under 5 would meet diagnostic criteria for ADHD – according to the instruments used to diagnose ADHD, their attention span is “low” and they are “impulsive and hyperactive.” Between 5 and 6, you can diagnose ADHD but it should be done carefully. So, your kiddo is right at the cusp of being able to be evaluated.
For two, it’s because stimulants will improve any person’s degree of focus, whether they have ADHD or not. So, even if your kiddo has improved focus with stimulants, it doesn’t mean they have ADHD.
You know your child better than anyone. But as a bystander, I would want to reassure you that even if your kiddo has ADHD and goes untreated – and even struggles – for a while, they’ll still be okay. Medicines help. And when that happens, they will see that they’re not “bad,” they just benefit from certain treatments.
So it’s worth taking this step by step and not rushing. Talk to your pediatrician, maybe initiate an assessment, see what it tells you, and go from there. Another option is psychoeducational testing – the school is required to provide this upon request, at no charge.
Thanks for your thoughtful response. To be clear, I am definitely not rushing into anything; this conversation has been going on for several years. I am trying to avoid being irrational in either direction: “drugs are bad! it’s just a phase” or “he’ll be hopelessly left behind if he’s not doing calculus by the end of kindergarten.”
That being said, it is not at all clear to me how your response speaks in favor of delaying treatment/diagnosis.
If medication will help my child focus and be happier and more comfortable in school, but he doesn’t “have ADHD,” why is that a problem?
If he is going to “grow out of it,” he’ll do so whether he takes medication now or not, but he’ll have been happier throughout.
Is there a specific answer to this question, other than vague assertions that we shouldn’t medicate if we don’t have to?
First and foremost: see your pediatrician. It seems like you have been really worried. In order for someone to help you, they would first have to ask you a lot of questions. However innocuous those questions could be, it’s not appropriate for random strangers to ask you those questions… or to know their answers!
That said, he may be more focused on stimulants, but it’s not clear that he will certainly be happier. Kids who take stimulants may have appetite suppression. They may have headaches. And they may exhibit less personality – the drugs work by turning up the “executive functions” dial, and sometimes people are so hyperexecutive that they get kind of robotic. People don’t like that and they sometimes prefer the ADHD.
These adverse effects may be acceptable if the drug helps the kid recover from a deficit. IE, there is what you call a “therapeutic balance”: the good effects outweigh the bad effects in a way that A. is meaningfully good to the patient, and B. allows their physician to make correct predictions. So if they have a diagnosis of ADHD, we can predict they will have trouble, and we can treat to reduce the probability of that trouble, which is what the patient wants.
If there is no such deficit… then there isn’t a reasonable, non-Faustian way to determine that therapeutic balance. IE, you can balance out “trouble from disease” and “trouble from medicine,” but you can’t balance out “doing fine” and “trouble from medicine.”
The latter case (fine vs adverse-effect-but-better-than-fine) is no longer a medical problem. It’s an engineering problem. Nootropics are a different kind of gamble.
And they may exhibit less personality – the drugs work by turning up the “executive functions” dial, and sometimes people are so hyperexecutive that they get kind of robotic. People don’t like that and they sometimes prefer the ADHD.
This was pretty much exactly why I discontinued when I was younger.
Thank you – this was a very cogent response and makes a lot of sense. I am asking random strangers for perspective, not for an answer. Of course I will consult with our pediatrician and other relevant health professionals.
What touched off my initial question is that I was specifically advised not to evaluate until the end of the school year (coinciding with his 6th birthday).
My takeaway from this is: 1) I should not avoid evaluation, because evaluation and treatment may help. I now understand where the advice to avoid evaluation is coming from, so that helps me understand why it’s not for me.
2) I should be very aware that a diagnosis (and therefore prescribed treatment) may not be right at this age (or any age).
3) If medication is prescribed, I should be extremely watchful with respect to therapeutic balance.
Not a doctor, but as a parent I’ve read things on the internet about it. It seems like, in the US at least, they tend to wait until 7 to diagnose it.
One example that may lead a normal child to get diagnosed: boys mature slower than girls, and younger kids struggle more than older ones with sitting quietly and focusing. Thus, if yours is a December boy evaluated against a standard set by a cohort of almost entirely January girls, he will seem behind. (Obviously this is extreme, but you get the idea.)
I don’t think it is a problem to get evaluated, as LONG as you don’t let the school pressure you into starting medication or placing him into “Special Ed” – an abyss from which it seems hard to climb out once there.
Also, luckily, it seems like getting evaluated and diagnosed does not equal medication: there’s all sorts of literature and providers out there who look at medication as last resort, and will recommend more physical exercise, reducing screen time, instituting routines, reducing sugar, delaying school by a year (i.e., start in 1st grade instead of Kindergarten) before prescribing the meds.
There doesn’t seem to be any evidence that ADHD meds do more harm to kids than to adults. There is much greater risk aversion around giving meds to children, which accounts for the reluctance, but this doesn’t seem too rational. On the other hand, the benefits are also essentially non-existent. Unless you are planning on tiger-parenting your kids, any head start they get now isn’t going to last to adulthood, just as the official Head Start program does not last to adulthood. And you’ll have to deal with paying for treatment and medication and dealing with all the bureaucracy, so ultimately I’d say no, don’t do it. See this if you have great trust in the “education folks at school:”
It is not clear that ADHD is an innate disorder rather than being created by the school system itself. There was a Harvard study recently that claimed a 30% increase in ADHD diagnosis by birth month across the enrollment date (ie the kids born in the last month possible while being in that class had 30% higher diagnosis rates than the kids born in the first month), which would be pretty damning on its own.
Additionally ADHD diagnosis has been increasing for the past 20 years, and at a fairly linear rate (which is not what you expect if the increases are from refining diagnosis and catching more marginal cases). This is correlated with an increase in schoolwork for young kids and pushing learning back earlier and earlier.
My (layman, but reasonably informed and highly interested as a home-schooler) opinion is that schools currently are actively preventing natural development paths and these are causing significant issues for many kids. Focus and attention should be viewed as skills to be learned and not papered over with medication (except as a last resort).
My (layman, but reasonably informed and highly interested as a home-schooler) opinion is that schools currently are actively preventing natural development paths and these are causing significant issues for many kids. Focus and attention should be viewed as skills to be learned and not papered over with medication
This, if accurate, represents the best argument I’ve seen to avoid evaluation/diagnosis at a young age. If I understand correctly, you are suggesting “ability to pay attention” is more of a trainable skill than is accepted by the current paradigm. This suggests that medicating for ADHD may actively prevent the development of focus skills because reliance on the medication renders such skill development unnecessary.
Is there any research to support this proposition?
I would say that another possibility is that even if it isn’t so much “learnable” (it probably is to at least some extent for the typical person) it is probably a part of childhood/brain development. Given that there are plenty of other areas where we don’t expect 5 year olds to be fully developed, I’m not sure why “attention span” or various other ADHD metrics would be one of them. Some of those kids are maybe just a year or two behind. The problem with starting medication at that age is…you never figure out if they were just going to grow out of it.
I guess every couple years you could stop the medication for a while and see if they can still hack it (having grown out of it) but at that point it would be a little surprising if they could since they’ve never had to try to operate without the medication boost.
That said, I don’t think you need to avoid evaluation at that age, just to be wary of immediately jumping to medication or of assuming that any resulting diagnosis is a permanent state for a 5 year old developing child.
I would say that focus and attention have aspects like weight, where there is both a strong natural tendency plus the ability to interfere for a different result (within limits).
Thinking about and rereading what you wrote I wanted to clarify something by way of example:
There is research from the 70s (I don’t know if it has held up or not, just an example) that claimed early reading and having to focus on small words on a page caused vision problems in kids which required glasses to correct them. There are a couple of possibilities assuming the first part is true.
1. The damage was more or less permanent and glasses were pretty much the way to go.
2. The damage would be reversed with time without glasses.
3. The damage would be reversed with time without glasses, but only if the kids stopped reading for a long enough period.
I don’t have an opinion yet on ADHD and whether it requires medication etc. once it is inflicted, but I want to be clear – ADHD as a diagnosis came with other behavioral issues beyond a lack of focus for many kids. I think that the school system is actively causing damage to kids who are pushed too far beyond their developmental level, not simply that ADHD is a descriptive term for how the kids would behave in a better environment.
If a child is showing signs of ADHD it might be any of
1. It’s just an age thing and they will grow out of it with time.
2. It’s the early stages of ADHD caused by something other than the school system.
3. It’s the early stages of ADHD caused by the school system.
In the case of #3 just waiting and observing would likely not have the desired effect (or might but at an unacceptable rate, lots of variables), but it could be that at some point ADHD medication is the best option, like glasses would be in the above example.
Focus and attention should be viewed as skills to be learned and not paper over with medication (except as a last resort).
I agree with everything but this conclusion.
We agree that schools are putting kids into a situation that they’re not adapted for. We agree that people can learn to function in unpleasant environments.
But this doesn’t imply that there’s any virtue in learning to tolerate unpleasantness for unpleasantness’ sake. Often, it’s simpler to remove the unpleasantness.
For instance, my water-heater died. I’m sure I could learn to endure cold-showers. But I’ll just fix the water heater.
Similarly, now that I’m mature, I CAN endure long morning meetings without coffee. But why would I choose to do that?
My disagreement here is that your water heater is static, more or less it works or doesn’t work, and fixing it doesn’t impede its growth (because it has none). The correct option is to do your best to improve the environment for kids rather than keep them in an unhealthy environment and give them medication so that they don’t notice it.
The doctor prescribed a dose that was — in retrospect — unreasonably high. This became a problem because: the doctor was an authority figure, I tried to be agreeable in general, and my parent had just finished explaining how I was a broken disappointment.
The result was that I didn’t advocate for myself and so spent several years on a dose of ADHD medication that was fairly unpleasant, and just accepted the side-effects as normal. If I’d spoken up, I suspect my treatment could have been modified to be less unpleasant.
This is not an argument for or against diagnosing early.
But, if you’re going to put your 6-year-old into ANY sort of long-term treatment, you have a duty to be really, really, really proactive about questions like “how does this medication make you feel?” and “no, really, how does the medication feel?”
(Hmm, the article on wiki doesn’t do him justice, in that it doesn’t paint an accurate picture of his findings, limiting itself mostly to sexual abuse. His findings were also about even more widespread physical abuse.)
The book (or the reviewer) seems to make the classic mistake where, when A can cause B, they conclude that B means there is A. They even go so far as to argue that learning is a sign of trauma!
When being human is equated to being traumatized, then the solution to remove trauma is…
What would a society which made major, effective efforts to minimize trauma look like?
Mass graves or dehumanizing people by wireheading.
Many believe that moderate trauma is necessary for growth and that the absence of trauma is very damaging, just as too few pathogens can cause lots of auto-immune disease.
Can you have love & friendship if you try to eliminate even moderate trauma?
I’m not saying that the problem isn’t serious. I’m arguing that presenting completely normal human behavior and experiences as severe trauma is a very bad idea.
By Sheckley (title forgotten): Everything in life depends on your mental health score. Showing anger will lower your score. Eventually, the protagonist can’t pass for calm any longer and is imprisoned for life in VR. Faint memory – there’s some looming threat, and anyone who’s aggressive enough to face it is imprisoned.
By Tom Purdom (title forgotten): There’s psychotherapy that actually works. It’s so expensive that people are wrecking themselves trying to get enough money.
By Margaret St. Clair (Rations of Tantalus/The Rage): People are required to take tranquilizers to prevent fits of rage, and the viewpoint character can’t get quite enough of the tranquilizer. It turns out that the tranquilizer causes the fits of rage, and the other part of the problem is not being allowed to show normal amounts of desire and frustration.
It is my understanding that dollar cost averaging is not meant to tell you over what time to invest a specific pile of cash, but to tell you that there are significant benefits to continuous regular investment, regardless of where the market is at any given time.
Good question. With the money in hand, and not earning much of a return, there’s a bit of a tradeoff. You don’t want to invest it all at a market peak, but you don’t want to stay out forever. It looks to me like we’re currently having scary corrections every couple of years, and a recession or crash approximately once a decade. I’d use those numbers to decide, and probably err on the side of shorter – so maybe 1-2 years. But I’m just a random casual investor, not any kind of expert.
With insufficient diversification or other portfolio rebalancing, I’d move faster, because this is a regular event. My ideal for stock from the company I work for (stock plan or RSUs), is to just routinely unload it at the point where its gains become “long term”. But if there’s a lot of money involved, I might sell a chunk of stock a week until I’m rebalanced, rather than all of it at once. (I also know the company cycle – there’s usually a dip twice a year right after the stock plan delivers stock to employees, because of immediate profit taking, so I try to sell the month before.) And I’m sloppy – I never seem to manage to rebalance on time.
Well, I don’t simply want to maximize the expected value of my investments. I want to maximize the expected utility of my investments, which is, uh, proportional to the logarithm of the value…
Haha, just kidding, the expected utility is determined mostly by whether I invest right before a crash and get totally pissed about that.
The standard defense: If your timeline is long enough for the money to be in stocks, crashes don’t matter, just wait them out and eventually you will be better off.
There’s probably not really a satisfactory answer. I have seen many people suggest half in a lump sum and half in installments over a year. But on average 100% lump-sum-today will outperform, at slightly higher variance and larger risk of regret. Time in market, and all that.
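To make that tradeoff concrete, here’s a toy Monte Carlo sketch. The monthly return parameters are made up for illustration (not market data); it just compares investing everything today against twelve equal monthly installments:

```python
import random
import statistics

def simulate(months=12, mu=0.007, sigma=0.04, trials=20000, seed=0):
    """Compare lump-sum vs. dollar-cost-averaged investing of $1.

    mu/sigma are assumed mean and stdev of monthly returns (illustrative only).
    """
    rng = random.Random(seed)
    lump_finals, dca_finals = [], []
    for _ in range(trials):
        returns = [rng.gauss(mu, sigma) for _ in range(months)]
        # Lump sum: everything is invested at month 0.
        lump = 1.0
        for r in returns:
            lump *= 1 + r
        # DCA: invest 1/months at the start of each month; idle cash earns nothing.
        dca = 0.0
        for r in returns:
            dca += 1.0 / months
            dca *= 1 + r
        lump_finals.append(lump)
        dca_finals.append(dca)
    return (statistics.mean(lump_finals), statistics.stdev(lump_finals),
            statistics.mean(dca_finals), statistics.stdev(dca_finals))

lump_mean, lump_sd, dca_mean, dca_sd = simulate()
print(f"lump sum: mean {lump_mean:.3f}, sd {lump_sd:.3f}")
print(f"DCA:      mean {dca_mean:.3f}, sd {dca_sd:.3f}")
```

With any positive expected return, the lump sum ends higher on average, but its final-value spread is also wider, which is the “higher variance and larger risk of regret” part.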
It’s a well-known fact in politics that the other side lies and my side is honest. Left-wingers believe the right-wing politicians and press lie, and right-wingers believe the left-wing politicians and press lie. I am a left-winger and, surprise surprise, I believe right-wing politicians and press lie.
But I think this is actually true, and not just a product of my bias, at least here in the UK. Here is a summary of violations of IPSO (press regulator) rules by various newspapers in the UK. The second plot in that link shows specifically inaccuracy rulings. All of those papers are right-wing with the exception of the Daily Mirror, a left-wing tabloid paper. The Daily Telegraph and The Times are supposed to be serious newspapers, while all the others (including the Daily Mirror) are considered lower quality papers.
Apparently IPSO doesn’t cover the Guardian or the Independent, which are generally considered to be “serious” and left of centre. But here is an article in, um, the Guardian (ok, I know how this looks, but its data comes from an independent source) saying that the Guardian is the most trustworthy, accurate and reliable newspaper in the UK.
So is this just another manifestation of me believing that “my side” is better and ignoring (or being unaware of) evidence to the contrary? Or is it actually true that right-wing sources are more prone to lying to further their agenda than left-wing sources? Is it true in the UK but false elsewhere?
It’s not like the Guardian and the Independent talking out their arses is unheard-of.
And anyway, smart people don’t tell flat-out lies. They weave cherry-picked half-truths with speculation to create a narrative that nobody can poke a definite hole in yet doesn’t reflect what the real world is like.
And I’m talking about both sides here.
Or even entirely true things that are made important.
Mass shootings in the United States are a good example for the left. They are rare and pose essentially no statistical danger to the average American. Being killed by a rifle in the United States is less likely than being killed by a blunt object (club, etc).
Or immigrant killings, if you want to take a right-wing talking point. First generation immigrants are substantially less likely to murder than the average American.
But without telling a single lie, just by emphasizing every single mass shooting/immigrant killing, you can create a narrative that the United States is a shooting zone for the innocent/flooded with dangerous foreigners.
What a valuable perspective, thank you. I think sins of emphasis are one of the most important things to keep in mind when navigating today’s world.
Another phrasing of this is that OP may be objectively right about their claim, but using that metric as a measure for “news source goodness” is a subjective choice.
Note that this is essentially the same analysis that Chomsky has made about the media’s foreign policy coverage for many, many years (with the addition that he added that most of this selection process is unconscious).
I think it’s all mostly unconscious regardless of the topic at hand.
I think that’s why practically all the noninterventionist politicians or pundits in the U.S. tend to be considered whacko along some other random direction. The selection process for stories favoring “America should spread freedom and democracy” is just so strong that the only significant people challenging it are fringey weirdos. Tons of paleoconservatives, communists, libertarians, and anarchists would prefer a much more isolationist policy, but outside the fringes everyone is pretty pro-intervention.
Yeah, the cardiologists/Chinese robbers thing is definitely a huge issue. But I think it’s the case that the Mail, Sun etc tell flat-out lies in addition to that.
A common pattern I’ve seen is reporters who tell a story that follows some comfortable/desired narrative, and then either omit the contradictory facts that would undermine the narrative, or downplay them. NPR seems to leave them out or downplay them; the New York Times seems to like to stick them in the last couple paragraphs of the story.
There is no law of the universe that society matches reality at the mid point.
But there’s a couple of meta-problems.
Let’s say we both design a protocol that can be carried out by a soulless unfeeling automaton to examine data on the issue in some way.
We each get our results back: one set of results says the left is worse, the other says the right.
Let’s say our actual methods are equally “good/bad”
Now we want to let the world know about our findings.
What are the odds that one of us will find it harder to publish in a scientific journal or in the guardian?
I remember an old story about a researcher who found that one side of the political aisle showed on average more [negative personality trait].
They were getting lots of media attention and citations.
Then a minor flaw was found in their data analysis: left and right columns had been swapped, so the exact same finding applied, but to the other political persuasion. Suddenly the citations and mentions in news articles dried up.
Re-run this a few hundred times.
So imagine a hypothetical reader: they search the news and the literature. What do they find? Probably a large dataset highly biased by the views of publishers and academia.
In no way is the Guardian more reliable than the Financial Times, so your source is, um, not trustworthy (it conveniently omits the FT). All British online media except the FT and BBC seem, from my continental perspective, blatantly biased toward their preferred political agenda, much more than serious American ones.
Yeah, good point; FT is pretty good quality and it’s a shame they were excluded. Maybe it’s because their readership is smaller? BBC these days is basically just another mouthpiece for the government.
FT is no more excluded than The Guardian. They chose not to sign up to IPSO.
Note that IPSO is itself a rival to IMPRESS, which is another regulatory agency that no big newspaper signed up for, but whose membership gives an exemption to the GDPR, because they have official approval.
So there are (political) layers here, where membership of these organisations may be an attempt to fight regulation and/or to get a competitive advantage over informal journalism.
I think AlesZiegler may have been referring to my second link, which listed the Sun, Mail, Times, Express, Telegraph, and Guardian. The source is OFCOM but I haven’t tried to chase down the numbers nor why these papers were chosen.
> IPSO is funded entirely by the shadowy Regulatory Funding Company (RFC) which is dominated by a handful of national and regional publishers.
> The RFC writes the rules which dictate what IPSO may or may not do, and (as then RFC chair Paul Vickers made clear to the House of Lords Communications committee) must approve any rule changes.
> IPSO’s rules are therefore written and controlled by the very newspapers it purports to regulate “independently”.
Also, bear in mind that IPSO does not independently audit newspapers at will, but it responds to complaints from the public.
If the progressive left are more likely to make complaints about inaccuracies in the right-wing press than vice versa (and I suggest that this is the case) then you will necessarily see more findings of inaccuracy against the right-wing press than the left but that is not necessarily reflective of reality.
If the progressive left are more likely to make complaints about inaccuracies in the right-wing press than vice versa (and I suggest that this is the case)
I think it probably is the case, but it would be the case if the progressive left was made up of disproportionately more fact-conscious people than the right. I suggest that this is the case.
I think it probably is the case, but it would be the case if the progressive left was made up of disproportionately more fact-conscious people than the right. I suggest that this is the case.
I would suggest that both the right and left have preferred policies and worldviews, and that both the right and left emphasize facts which support their policies and worldviews, and downplay or deny facts which do not.
What makes a person right-wing or left-wing is their temperament: do they value order over novelty, safety over adventure, etc…, not whether they are more “fact-conscious”. To the extent that fact-consciousness is a thing, I would expect it to be normally distributed across both the left and the right.
To paint your ideological enemies as allergic to facts is lazy and trite, and suggests to me that your exposure to them is done through an ideological filter.
To paint your ideological enemies as allergic to facts is lazy and trite, and suggests to me that your exposure to them is done through an ideological filter.
And yet there are successful movements in politics right now that are heavily based on the confident assertion of factual inaccuracies. I am opposed to such reprehensible dishonesty and so the charlatans in question are my “enemies” in some sense. Do you see the problem this poses? Any liar, however brazen, can ignore those who challenge them as “painting them as allergic to facts, which is lazy and trite”.
And yet there are successful movements in politics right now that are heavily based on the confident assertion of factual inaccuracies.
Most obviously the climate catastrophe movement, which confidently asserts negative implications of climate change enormously larger than the IPCC projections.
Two of my favorite quotes from an IPCC report:
Some low-lying developing countries and small island states are expected to face very high impacts that, in some cases, could have associated damage and adaptation costs of several percentage points of GDP.
With these recognized limitations, the incomplete estimates of global annual economic losses for additional temperature increases of ~2°C are between 0.2 and 2.0% of income … .
I think it probably is the case, but it would be the case if the progressive left was made up of disproportionately more fact-conscious people than the right. I suggest that this is the case.
Devil’s advocate:
This would also be the case if the right-wing was made up of disproportionately more fact-conscious people than the left, such that they were more likely to report problems in even right-leaning publications.
Indeed, you would EXPECT publications that catered to more fact-conscious people to get more complaints about factual inaccuracies than publications whose audiences care less about getting the facts right. And you would expect left-wing publications to mostly be patronized by a left-wing audience, and right-wing publications to mostly be patronized by a right-wing audience, so…
I have no idea if any of this is true, obviously, but that’s the point; it’s easy to construct a plausible story for any particular observation. Making up narratives that confirm your own biases is a trap, and it’s not a hard trap to walk into, because the bait is very, very tempting.
It’s not particularly difficult to find an issue where one “side” is lying and the other is telling the truth. The problem is that there are lots of issues and each side is likely lying about at least some of them.
Additionally, the worst lies are always told using the truth- or rather, a portion of it, with the inconvenient bits omitted.
Yeah, we could back and forth on examples of lying but what’s the point? There is no grand arbiter of lies, so the question isn’t going to be resolved.
There is a difference between lying and telling falsehoods. Telling a falsehood that you sincerely believe is true, is honesty of the deceptive kind.
The other side is more honest than you* think, because they actually tend to believe things that you consider so obviously false that only total idiots could believe them. Since they seem capable of dressing themselves, people tend to conclude that the other side is being deceptive. Your own side is also more honest than the other side tends to think, because you honestly believe things that the other side considers so obviously false that only total idiots could believe them.
Both sides cherry pick evidence, typically believing that the evidence that supports the other side is poorer.
I could go on and on.
* No matter what your side is.
Apparently IPSO doesn’t cover the Guardian or the Independent, which are generally considered to be “serious” and left of centre.
The actual reason is that IPSO only regulates those who sign up for it, which The Guardian didn’t. I don’t understand why you would trust IPSO to tell you anything about how accurate newspapers are, when newspapers can just decide to not participate.
There are a ton of possible explanations of why right-wing newspapers top the rankings, other than that right-wing newspapers are more often wrong/deceptive, for example:
– Right-wing newspapers are more masochistic or more interested in being correct, signing up and/or staying signed up even when they know they will face many corrections
– IPSO has a left-wing bias (most newspapers seem to be leftist, UK newspapers make the rules for IPSO, 1 + 1 = ?)
– Right-wing newspapers are less likely to correct stories without intervention by IPSO (most investigations seem based on complaints)
– Right-wing readers are less likely to complain than (a subset of) left-wing readers
– Left-wing media state things in a way that is equally deceptive, but not technically false (or the falsehood consists of a quote, which is not rebutted)
saying that the Guardian is the most trustworthy, accurate and reliable newspaper in the UK.
I was able to very quickly find an article where The Guardian proves its own statement to be a falsehood in the next paragraph of the same article (bold is mine):
When asked to provide evidence that mothers were making up abuse claims, she said she had personal experience and “submissions from people that this is the case”.
The claim is a prominent grievance among men’s rights groups, but has been widely discredited in multiple studies.
According to researcher Jess Hill, who has authored a book on domestic abuse called See What You Made Me Do, one of the most thorough studies on false abuse allegations from Canada found that non-custodial parents, usually fathers, made false complaints most frequently, accounting for 43% of the total, followed by neighbours and relatives at 19% and mothers at 14%.
I suspect that this mistake is because The Guardian didn’t listen to what Hanson actually said, but interpreted it as something very different based on a stereotype & then set out to prove that stereotype false.
I followed The Guardian a bit in the past and they got called out for their biases and mistakes in the comments a lot, until they closed the comments for most (if not all by now) articles.
You make a lot of good points, and unfortunately I need to leave in two minutes so I don’t have time to try and respond properly, but I do want to disagree with your statement that most papers seem to be leftist. The Guardian and the Mirror are on the left, and some people make an argument that the Independent is as well, though if so it’s only very weakly, but all the other papers (pretty much) are on the right. The Mail, the Express, the Telegraph, the Times, the Sun, the Star… Maybe we could make an argument that the FT is near the centre, though still to the right of it.
Are you talking about the papers that were featured in the article you linked or the general newspaper landscape? In most Western countries, the general newspaper landscape seems to lean left relative to the populace.
Then again, the UK is very tabloidy, so they might be different.
I’ll say it again: media bias isn’t just in what a source says that might be false, but also in what it doesn’t say, that might be true.
(Some of this is touched on above. A source can and will leave out context, including context that will make their core story much less important than it’s reported to be.)
It’s a well-known fact in politics that the other side lies and my side is honest
Are you using the definition of “lie” that means only making explicit and provably false assertions of fact, or the definition of “lie” that means acting with malicious indifference to the truth such that other people end up believing falsehoods?
Because the left basically controls the journalism schools, along with the universities in general, and mostly doesn’t even bother to deny that any more. Journalism schools are where reporters learn how to present whatever view or narrative they are pushing while carefully avoiding explicit assertions of provably false fact. And how to edit newspapers so as to make sure individual reporters don’t slip up on that point. So they wind up pretty good at avoiding type-1 lies in their journalism.
The right gives the left no credit for avoiding type-1 lies when they catch them in so very many type-2 lies. And the right doesn’t bother as much with avoiding type-1 lies themselves because A: they are at least consistent in not caring and B: they don’t have as much access to first-rate journalism graduates.
The amount of false stuff you’ll wind up believing if you uncritically read a “right-wing tabloid” is probably not too far off what you’d wind up with from a “serious left-leaning newspaper”, but with the right-wing tabloid it’s easier to apply critical thinking to catch the lies and easier to blame the right-wingers for blatantly lying to you rather than admit you were fooled by clever half-truths.
Why? They both produce the same undesirable end result, they both reflect the same malevolent intent, and the clever trickery is usually harder to catch.
Mostly, I think we give clever trickery a pass because we want to be able to use it ourselves without thinking poorly of ourselves. That’s not a good thing.
It’s also not good that we are producing a generation’s worth of people who, when caught telling type-2 lies, will be outraged and indignant about being treated as liars and will support each other in this outraged indignation. We’re replacing respect for the truth with respect for cleverness in deception.
Hard disagree. I think the vast majority of such cases are ones where the people involved are sincerely misleading themselves just as much as they are anyone else.
EDIT: Actually I also don’t think I agree with the first claim, now that I think about it. Believing things that are obviously and verifiably false seems worse to me than believing things which are misleading because taken out of context, or something along those lines.
We’re talking about people believing things that are objectively and verifiably false in both cases. The “misleading” statements, mislead people into believing something more than what was explicitly stated – that’s pretty much the definition of misleading – and that something more is objectively false even though the misleading statement was just fuzzy.
And if you put yourself forward as a journalist, then no, you don’t get a pass on that because you “sincerely believe” the false thing. You’re supposed to have been the one who figured out whether it was true or false, so you could tell the rest of us.
And really, if we find that you always very carefully stop short of explicitly stating that false thing while you keep “misleading” other people into believing it, then we’re going to be skeptical of the bit where this was allegedly a sincere mistake on your part, because how did you know exactly where to stop?
It seems to me that the logical extension of your argument is that good journalism is impossible.
There have been, in the entire history of the human race, precisely 0 people who are immune to cognitive biases about things they care about. That’s kind of the point of websites like the one we’re currently commenting on.
Reputable newspapers have practices in place to mitigate some sources of bias, such as printing no overt falsehoods, and trying to get quotations from opposing sources. Plenty of things still slip by. But it’s still better to have those practices than not.
There’s just as much bias, lies by omission, and so forth in right wing as left wing papers. But what you’ve argued is that there’s an ADDITIONAL thing in the right wing ones that isn’t there in the left: outright deliberate falsehoods. This means they’re worse. I can’t parse this any other way.
What we want from journalists and scientists is an honest and competent best-effort at getting to and reporting the truth. Their biases will screw them up sometimes, other times they will just flat make mistakes, but they should be making a serious attempt at learning the truth and conveying it accurately.
Management and funding sources and social pressure can all create an incentive for the journalists/scientists to fudge their answer or stop looking once they have what looks like a desired answer. But the best ones don’t like to do that, so mostly they just figure out what they’re not supposed to study and then go look into something else.
It seems to me that the logical extension of your argument is that good journalism is impossible.
Perfect journalism is impossible. Good journalism is quite possible.
Good journalism almost certainly requires that journalists not operate in ideological bubbles, left or right. It also requires that they not rely too much on the “you can’t prove this wasn’t an honest mistake” when their mistakes keep coming so close to the provable-misconduct line and always in the direction preferred within their bubble. This isn’t the journalism we have, at CNN or Fox News, the Washington Post or the Washington Examiner, or as near as I can tell at their British counterparts. I don’t think I am out of line in demanding better, and withholding trust until I see better.
Reputable newspapers have practices in place to mitigate some sources of bias, such as printing no overt falsehoods, and trying to get quotations from opposing sources. Plenty of things still slip by. But it’s still better to have those practices than not.
Bias can be thought of as the motivation to believe certain things. Since, as you say, “precisely 0 people […] are immune to cognitive biases about things they care about”, it ought not surprise you that if the motive to believe certain things is greater than the motive to be Tarskishly correct, then it’s great enough to color not just the object level content, but also even the practices to mitigate bias in that content.
How many times have you seen an article where “$opposingSide could not be reached for comment?” How hard do you think that journalist tried? Did they call ahead, schedule an interview for the following week, and sit down for half an hour or so like they did with the primary source? Or did they leave an email with a request for comment two hours before the story had to go out the door? How do you know? How many times have you noticed the big sheet of laminated colored plastic between a newspaper’s news section and its opinions section, so that readers could not fail to notice they were venturing into non-factual claims? Or the unmissable switch from 10-pt Times-Roman font to 15-pt Comic Sans?
There’s just as much bias, lies by omission, and so forth in right wing as left wing papers. But what you’ve argued is that there’s an ADDITIONAL thing in the right wing ones that isn’t there in the left: outright deliberate falsehoods. This means they’re worse. I can’t parse this any other way.
Suppose I give you two buckets of water. One has normal looking water. The other has bits of dirt visible in it. Which one are you more likely to pour through a filter and let sit with an iodine tablet for half an hour before you drink it?
What if you later find out both were pulled from a stream teeming with harmful microbes? Which bucket is more likely to have you sitting at the latrine an hour later with a case of the runs? The one you filtered because it was obviously dirty, or the one you went ahead and drank because it looked clean and you were really thirsty?
It seems to me that the logical extension of your argument is that good journalism is impossible.
It’s about as impossible as good science. Which is to say, it’s as impossible as performing science in a way that never produces false results. Which is to say, as impossible as something that we don’t really think of as good science, but rather “impossibly perfect science”.
Good science, by contrast, still produces false results, but in the limit, results whose degree of falsity decreases. Good journalism can do the same. But that journalism is only good if the journalists are motivated to Tarskian levels of truth, more than to any other beliefs they might have.
And the best known way to do that is to make sure the journalist pool has people with diverse sets of prior beliefs, then let them do their thing, both normal reporting and bias mitigation, and then aggregate the whole thing using a trusted mediator, and expect results which might still be false, but in the limit, less false than any single source. Then you do that again the next day, and the next, so that that limit is more likely to work in your favor.
These days, that trusted mediator is going to have to be yourself, and even that doesn’t work unless you’re that devoted to Tarski, too.
I think it’s not better, because I think the point of a newspaper is TO be a reliable/credible source of information, not to NOT lie. Misdirection or lying still makes the newspaper less credible, and me less likely to take what it says seriously.
I am aware that this is not what most newspaper readers select for, but allow me my own preferences here 😛
I don’t know about your country, but in the UK, the left absolutely does not control the journalism schools. Journalists are heavily dominated by the privately educated. They’re upper-middle class (not to mention white and male). They went to school and university with the people who ended up being the politicians (and the bankers and the businessmen…). In fact they’re often the same people. Johnson was a journalist; Osborne is now an editor. They represent a rich, right-wing club and their journalism reflects that. There are a few exceptions, but only a few.
Also, unlike in the US, there is much less money on the left of British politics, which I think is a big contributor to the problem.
And don’t fall into the trap of thinking that this is “serious left-wing papers” vs. “right-wing tabloids”. The Times and the Telegraph are “serious” papers and the Mirror is not. I know there’s a popular idea around here that the left are the middle class elite and the right are the ordinary working people, and that might be true in the US, but I really want to push back against the idea that it’s true in the UK.
I have been warned never to be alone with a girl, because apparently false rape accusations are rampant in my country.
I have also been assured that large percentages of girls are seriously sexually assaulted, and a significant percentage are raped.
All this fairly recently.
While I am sure these things happen, people around me seem convinced that these are far, far more common than anything that seems realistic to me. I meet people from a lot of different social groups, and most people I know I’d have a hard time believing would do any of these things.
If the “rape is rampant” side is to be believed, at least a fifth of women get seriously sexually assaulted. Who is doing all this assaulting?! Are a tiny portion of men assaulting a lot of women? Are a huge chunk of men assaulting women? Why?
If the “false accusations” group is right, all the same questions!!
The big question for me is: why can’t being a decent person be enough to let you trust others anymore?
So now I’m wondering if I’m just oblivious or there has been something pushing people to trust each other less recently.
I’m probably not in your country, but I seem to recall statistics from 40 years ago in the USA that had more than 50% of women experiencing at least one sexual assault in their lifetime, including assaults that did not succeed.
Other than that – if you hear about it a lot, your subconscious decides it’s common. That’s why people in the US are currently afraid to allow their children out of their sight – some random stranger is, they think, going to try to kidnap or assault them. (The actual statistics there suggest this is actually quite rare.)
I live in Israel (are people going to hunt me down now?)
50% seems unusually high (US or not), though since I’m male I’m not sure.
Unless sexual assault was defined as any unwanted sexual encounter.
This is far too long ago for me to find sources, but it’s well before a lot of recent attitude and definition changes. I’d guess it included attempted rape, but not e.g. butt pinching, groping etc. And it certainly would not have included lack of enthusiastic consent; that standard wasn’t yet current.
When has being a decent person ever been enough to let you trust others?
It’s pretty easy to find statistics to determine how common these two crimes are. The prevalence of false reporting of rape is approximately 2-10%. However, it’s worth noting that not every such false report involves an accusation against an actual person; some are entirely phantasmal. A more realistic rate of false reporting, based on the above link, would be around 2.5%. There were about 90,000 rapes reported in the US in 2015, of which ~97.5% were substantiated. For a male population of ~160 million, that means your chances of being the subject of a false report of rape in any given year are about 1 in 71,111. Based on an estimate from the National Safety Council, you’re about as likely to die in a fire in any given year.
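As a quick sanity check of that arithmetic, here is the calculation spelled out; every input below is the figure assumed in the paragraph above, not an authoritative statistic:

```python
# Back-of-envelope check of the false-accusation odds quoted above.
# Assumed inputs (from the comment, not official data):
reported_rapes = 90_000        # US rape reports, 2015
false_rate = 0.025             # ~2.5% of reports both false and naming a real person
male_population = 160_000_000  # approximate US male population

false_reports = reported_rapes * false_rate  # 2,250 false reports per year
odds = male_population / false_reports       # chance of being the accused

print(f"~{false_reports:.0f} false reports -> about 1 in {odds:,.0f} per year")
```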
The demography of actual incidences of rape is a lot more complex, but the “1 in 5 women experience sexual assault in their lifetimes” figure comes from a CDC study. Based on a brief study of these results, I’d guess the majority of these cases are “intimate partner” assaults.
It comes from a seriously flawed CDC study.
In 2015, there were about 90,000 rapes reported to law enforcement. Of those, fewer than two thousand resulted in a conviction. If you take the number of convictions, .0006% of the population is raped every year. If you presume every person who reported it was telling the truth and the prosecution just mucked it up (a very, very generous assumption), then the rate is less than .03%. (And false rape accusations are 0%.) If you assume only one out of three crimes is reported, you get a rate of about .08% (or 270,000).
Compounding the highest of those numbers and presuming each rape is to a unique person (which will make the number larger), 21 million people will be raped over a given (average) woman’s lifetime. This is a rate of 6.4%, or roughly one out of sixteen. In contrast, if you believe that the justice system gets to determine whether a rape occurred, 156,000 rapes will occur over a given woman’s lifetime, a rate of about .05%, or roughly one out of two thousand. Note, this number includes men (and if you wish to expand beyond rape to all sexual assault, you get a larger proportion of male victims).
So the correct number is not ‘1 in 5 women will be raped’ but somewhere between ‘1 in 16 people’ and ‘1 in 2,000 people’. The way you get to ‘1 in 5’ is either by presuming fewer than one out of three crimes is reported or by counting all sex crimes as assault, or both. (And then doing a bit of sophistry: men are victims of sex crime at pretty high rates but women are more likely to be raped. Yet somehow the number always becomes ‘1 in 5 women raped’ instead of ‘1 in 5 people are victims of sex crimes’. I’ve seen studies that count things like men getting groped as violence against women…)
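The lifetime figures can be redone explicitly. The population base, lifespan, and per-year counts below are the assumptions used in this thread, not official statistics:

```python
# Recompute per-year and lifetime victimization rates from the assumptions above.
population = 328_000_000   # approximate US population (assumed)
lifetime_years = 78        # approximate life expectancy (assumed)

scenarios = {
    "convictions only": 2_000,       # ~convictions per year
    "all reports believed": 90_000,  # ~reports per year
    "1 in 3 reported": 270_000,      # reports scaled up 3x
}

for label, per_year in scenarios.items():
    yearly = per_year / population
    # Naive compounding: assumes every victim is a unique person.
    lifetime = per_year * lifetime_years / population
    print(f"{label}: {yearly:.4%}/year, {lifetime:.2%} lifetime "
          f"(~1 in {1 / lifetime:,.0f})")
```

The “unique victim” assumption inflates the lifetime numbers, so these are upper bounds under each scenario.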
You are correct that the best evidence puts false rape report statistics somewhere in the 2-10% range. That doesn’t include cases the police dismiss as non-credible on their face, or accusations made to bodies like HR boards. Still, the rate is relatively low.
The question of which is more common is basically what you believe. If you believe that only convicted rapists are rapists then being falsely accused is as or more common than rape. (In fact, at the high end, it might be as much as five times more common.) If you believe the majority of rapes go unreported or unconvicted, then even at the high end (let’s say 20%: the high end of 10% and doubled for the ones that got dismissed etc) false rape accusations are significantly less likely than rape (though any individual accusation has about a one in five chance of being badly motivated in that scenario).
This is why it effectively serves as a wedge issue. Do you believe the criminal justice system works or doesn’t? If you do, it follows the low end of those numbers makes sense and false rape becomes more prevalent than rape. If you don’t, then false rape accusations might not even exist or are at least rare.
presuming less than one out of three crimes is reported
This doesn’t actually seem unreasonable to me, especially if the majority of sexual assaults take place within pre-existing relationships. Most of those aren’t going to be reported. The one out of three number comes from an actual estimate, but there are lower and higher ones. I wouldn’t say the liberal position that rape is extremely underreported is unbelievable on its face. It is unfortunately pretty unfalsifiable (women don’t report rapes and lie on our surveys, but they’re there!), so it leads to all sorts of wildly bad statistics.
What’s your estimate for the proportion of robberies or common assaults that are reported? How do you think that compares to the proportion for rape?
I’m not using my estimates at all. I’m not a criminologist but a statistician.
The one in three number comes from the Bureau of Justice. You also have RAINN (who have an incentive to portray rape as common) who estimate it as 38.4% are reported. Both agree rape is relatively underreported: the Bureau of Justice estimates a little under half of non-sexual assaults are reported while RAINN says about two thirds are.
@Erusian
I thought your argument was that the one in three figure was unrealistically low. But now you seem to be saying it’s reasonably accurate. Or are you assuming that the RAINN figure is wrong?
I’m not assuming anything. My simple point is once you work the math then your assumptions lead to two different conclusions, which means it serves as a wedge issue.
If you’re referring to me calling the CDC study seriously flawed, you missed that the one in three number still doesn’t get you to the one in five statistic often quoted. You need to presume significantly fewer are reported. I’ve found no credible studies that make a strong case for the number they’d need to reach it, either from the Justice Department or RAINN, which is fairly damning of that particular statistic.
No, that’s the prevalence of accusations proven to be false “if there is a clear and credible admission [of falsehood] from the complainant, or strong evidential grounds.” You could just as easily say that false accusations make up the majority of rape reports if you consider every accused rapist not convicted of rape to be innocent. They also exclude cases of mistaken identity, where the rape occurred but the wrong man was accused.
That only considers cases reported to the police, as Scott pointed out here. And do you not see the obvious problem with comparing the lifetime probability of one thing (being raped) to the per-year probability of another? Or with including everyone with a Y chromosome in the same pool of “individuals who could be falsely accused of rape”?
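The per-year vs. lifetime mismatch can be made concrete with a quick conversion. The 1-in-71,111 per-year figure is the one quoted upthread; the 60-year exposure window and independence across years are my simplifying assumptions:

```python
# Convert a per-year probability into a lifetime probability, assuming
# independence across years. The per-year figure (~1 in 71,111) is the
# one quoted upthread; the 60-year exposure window is an assumption.
p_annual = 1 / 71_111
years = 60
p_lifetime = 1 - (1 - p_annual) ** years
print(f"lifetime probability: about 1 in {1 / p_lifetime:,.0f}")
```

Because the per-year probability is tiny, the lifetime figure is close to simply 60 times larger, which is why comparing a per-year number to a lifetime number understates one side by more than an order of magnitude.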
As I pointed out, your numbers are wrong, but even taking them at face value, people take precautions to avoid dying by fire.
Sometimes those cases “proven to be false” aren’t false:
https://www.propublica.org/article/false-rape-accusations-an-unbelievable-story
Note that Marie’s rape was included in the “false reports of rape” statistics for the year in question, and that for that year Lynnwood had about 4* times the percentage of “false reports of rape” that the country did as a whole – possible evidence that their police department was insufficiently scrupulous in determining the facts about rape reports.
Edit: * – “In the five years from 2008 to 2012, the department determined that 10 of 47 rapes reported to Lynnwood police were unfounded — 21.3 percent. That’s five times the national average of 4.3 percent for agencies covering similar-sized populations during that same period.”
Yes, and sometimes the convictions are false too. That’s noise in the categorization system; if the signal-to-noise ratio is small, we’ve got big problems.
And sometimes cases “proven to be true” aren’t true, what’s your point?
This is why we have tests for statistical significance.
My point is you should be careful with the word “proven”. That is all.
I’m used to seeing people use the word with respect to logic and rationality here on SSC, but that’s not the way it is used in law.
@anonymousskimmer
It’s strange that you decided to highlight the error in one direction only.
That 2-10% figure comes from a range of different studies with different criteria for “false reporting”. The most stringent requirements actually return false reporting levels well below 2%; the least stringent give results well over 10%. I feel that the lower end of the range gives the most reasonable answer, as not every case of false reporting involves an accusation against an actual person.
I actually did use the per-year probability of dying in a fire (1 in 118,051). Please read my sources before accusing me of misusing them.
And even with those precautions, they still have as great a probability of dying in a fire as being falsely accused of rape without any precautions.
Ditto for yourself, my comment was about comparing the annual probability of being falsely accused with the lifetime probability of being raped.
You’re assuming no men take any precautions, which does not seem right to me.
From the methodology section of your own link:
This is the _proven_ false allegation rate. The meta-analysis notes studies reporting numbers “from 1.5% to 90%”, and references a study with a 64% “unfounded” rate. They concluded that study showed a 10.3% “false” report rate, after excluding “cases in which the police decided that the victim was an unsuitable witness, in which the police could not or did not produce corroborating evidence, in which the victim stopped cooperating with the investigation, or in which the investigating officers seemed to be prejudiced against the victim.”
So yes, they used studies with different criteria. But they didn’t use those studies’ conclusions; rather, they re-analyzed the data according to their own criteria (if they could).
The “2-10%” number is rape accusations demonstrated to be false. Most accusations result neither in a conviction nor a demonstration that the accusation was false. And there’s also not-insignificant categories like “the specifics of the accusation did not constitute a crime”.
+1 Thank you for pointing this out (again).
I don’t live in the USA, and I think assault in general is much rarer here; I generally feel safe around any countryman of mine.
My daughter was asked to homecoming by a boy from a neighboring school that she knows only through inter-school activities. After accepting, she was warned off by no fewer than 3 friends, because the boy has a reputation for sexually assaulting / roofie-ing girls. So she decided not to attend homecoming with this boy, and told him so. Then she had two more friends come forward and affirm her decision – it was a ‘good call’, she was told. No one, however, told her who he had assaulted – only vague ‘heard it from a friend who heard it from a friend’ rumors. I don’t know whether this boy deserves the cloud that follows him or not. Perhaps he’s being bullied by another party or parties who have decided to make him a social outcast.
In any case, it seems likely that Purplehermann is worried (and maybe should be worried) about this sort of accusation as well. It’s much less risky for a false accuser than going to authorities, and perhaps more common than the type of accusations covered by the study you link to.
“He seemed so normal.”
Being a decent person means that other people can trust you, not vice versa.
I mean, maybe. Certainly the story on a lot of police, or other abuse type things is that what’s changed isn’t the frequency (or if it has, it may even have gotten less common) but the ability to get proof, so the folks who otherwise would have said, ‘nah, lying criminals/sluts/children’ instead end up seeing themselves as betrayed by neighbors/authorities/friends, which does undercut trust.
If you were wrong about John, your coworker, who really was raping his daughter (all generic you’s, obviously), as the DNA evidence proved, then can you really trust your judgment about Jane, your neighbor?
“He seemed so normal.” I don’t think I’m falling for this failure mode. I don’t know anyone who is a (proven or actually accused) rapist, I have difficulty believing there are large quantities of women in my country doing something as horrible as falsely accusing men of rape, and I only know one girl (maybe two) who has made a false accusation socially; this particular person did not surprise me, but filing a false report would have.
Supposedly people can’t tell when others are depressed or even suicidal, but I have a good track record for noticing something is wrong. This makes me suspicious that I would totally miss any signs that rape or false accusations are as prevalent as is apparently believed.
I’m sorry, you know no one who has been accused or convicted of rape and one, maybe two people who have made (you somehow know) false rape allegations, but you’re still taking seriously whoever is telling you to never be alone with a woman out of fear of a false accusation? Have you considered that this person may just be paranoid? Alternatively, given that you weren’t surprised, maybe just don’t be alone with people who you think would make such accusations, as you’re apparently very good at picking them up.
You may be better at noticing such things than I am, but alternatively, people may just not want to discuss such matters with you, especially if you’re friends with the person they would accuse, as is usually the case in social circles.
Or, maybe your social circles really aren’t infested with rapists, or false accusers. Not everywhere has to be, for it to be a major problem at a national level.
This distrust of people has been showing up recently from multiple people who don’t know each other. I am worried that the social fabric (for want of a better term) is deteriorating for some reason, and about lots of people being horrible, more than about my own well-being; the warning was brought up to show that some people are really worried about this and think it’s common enough to be a serious risk.
Eh, for decreasing social trust, I have no useful insights, especially for a country I’ve never been. I think there has been a general increase in cynicism through web culture more generally, which I think has negative impacts, but again, I’m only familiar with the areas I interact in.
But do they all know someone else who has been stirring stuff up? Directly or indirectly. Stirring up drama is pretty easy, and there’s no shortage of people who do it. For instance, consider this recent case in the US. One girl posts a notice in a girl’s bathroom that there’s a rapist in the school “AND YOU KNOW WHO IT IS”. Other girls post similar notes based on the first note. It’s assumed by various other students that one particular male student is the rapist. But it’s all meaningless drama; the original poster denies she meant the male student targeted, and the copycats had no idea.
Most rapes are committed by the ~2% of the male population who are sociopathic. I suppose it’s the same for false rape accusations, except that most of the perpetrators are women.
By whom were you so warned and assured, and why didn’t you ask these questions of the person doing that warning and assuring?
“I have been warned never to be alone with a girl, because apparently false rape accusations are rampant in my country.”
I don’t think the modal fear with this advice is false accusations of rape. Usually it’s “sexual misconduct.” (The scare quotes are because of the phrase’s vagueness.) I think most of us have witnessed co-workers who do not get along with one another. This problem has always existed and isn’t going to be magically fixed anytime soon. “Just be nice to people” is always good advice, but you have to consider that sometimes people are not going to be nice back, and if certain variables align in a certain way you could be at a substantial disadvantage with no one, no lobby, no organization to have your back.
Most likely.
Crime in general is not uniformly distributed in population. Some people are way more violent than others. Some people are way more mentally disturbed than others. Some people have low self-control. Some people are psychopaths. Some people are in positions where it is easy for them to abuse others, either because they have some kind of protection, or because they have an access to many vulnerable potential victims.
I can’t speak with any confidence about the fraction of rapists in population, or about the fraction of rape victims, but it wouldn’t surprise me at all to learn that e.g. 1% or 2% or 5% of men would rape 10% or 20% or 50% of women.
(Ignoring all other forms of rape, to keep this debate simple.)
This topic is politically sensitive, because… let’s say that explaining things by “there are differences between people, so they act differently, duh” is frowned upon these days. And of course, the situation is more complicated: the boundary between e.g. violent and peaceful men is not sharp; people behave differently on different days because their situation or mood has changed; sometimes unusual situations happen; etc. So there are also rapes where the man is an otherwise decent guy, who did something “out of character” because his emotions got momentarily stronger than his self-control. And the whole spectrum in between.
So I could imagine the right answer to be something like “15% women raped by 1% of men (serial rapists), and 5% women raped by 5% of men (a date gone wrong)”. There are also repeated victims among women, etc.
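A back-of-envelope check on the hypothetical split above, assuming equal numbers of men and women and no repeat victims (both simplifications; the fractions are the comment’s illustrative guesses, not data):

```python
# Victims per perpetrator implied by the hypothetical split above,
# assuming equal numbers of men and women and no repeat victims.
scenarios = [("serial rapists", 0.01, 0.15),    # 1% of men, 15% of women
             ("date gone wrong", 0.05, 0.05)]   # 5% of men, 5% of women
for label, frac_men, frac_women in scenarios:
    victims_each = frac_women / frac_men
    print(f"{label}: each perpetrator accounts for ~{victims_each:.0f} victim(s)")
```

The point the arithmetic makes explicit is that a small group of serial offenders can account for the bulk of victims: 1% of men victimizing 15% of women implies roughly fifteen victims per perpetrator.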
Rape is very much a case of a small number of perpetrators and a large number of victims. DNA work on backlogs of rape kits turned up horrifying numbers of repeat hits; interviews of convicted rapists and anonymous surveys of large numbers of men tell the same story. North of 80 percent of all sexual assaults are the work of serial offenders who keep on victimizing people until they get too old to continue doing so or they finally wind up behind bars. The remainder are mostly cases of “everyone involved too drunk to consent to anything”, and a small smattering of one-time offenders who found the actual experience did not match the fantasy and consequently stopped. One-time offenders are, to a first approximation, never convicted.
Going off cases of people who actually spent time behind bars for rapes they did not commit, there is one and only really one way for this to happen to you.
Step one: Look suspicious. By which I mostly mean “minority”.
Step two: Be in the vicinity of a particularily gruesome rape.
Step three: Have a shitty local police department and a bad public defender.
Note that there are multiple definitions of sexual assault, ranging from rape to sexual behavior without consent to sexual behavior with consent that the person later regretted.
There are also multiple definitions of false accusations, ranging from malicious accusations intended to harm the accused to defensive lies to protect the accuser from blame* to non-true accusations.
* For example, a cheater can accuse the person they had sex with, to save their own relationship.
This is an appeal for help. I’m a huge fan of SSC and this community, so I figured this was the place to come for what I need.
I’m about to start interviewing candidates for two open headcount on a team that I’ve recently become manager of. Most of the candidates have some relevant experience etc., but I’d like to recruit based on general intelligence.
I have some constraints:
I work in a mid-level role in a large company which means that I can’t start unilaterally handing out written tests, I have to rely on a 30-45 minute interview.
Any candidates I favor have to be interviewed by my superiors, and at a minimum any “weirdness” in the interview with me is likely to be fed back to my superiors then, so I’d like to avoid that.
Several candidates are coming from recruitment agents, and will likely be encouraged, post-interview, to share the questions I asked, so that they can be provided to other candidates, represented by the same agent. So I need to ask different questions to different candidates.
And finally, I am a native English speaker and will conduct the interview in English, but none of the candidates are native English speakers, so I don’t want to accidentally test English proficiency instead of intelligence.
Given these constraints, what’s a good selection of different questions I can ask in an interview to determine who is unusually intelligent?
@ Danno28
I probably won’t have much useful to say, or the time to formulate it if I did, but it might help others if you answer the following questions:
– Can you elaborate on what “general intelligence” means here? Do you have anything more specific that you wish to select for?
– You say you can’t administer written exams, but are you able to ask technical questions that involve the candidate working through analysis, code, or whatever is required in your particular domain?
As an aside: I personally wouldn’t discount the value of testing a candidate’s proficiency in English. In my limited experience, even very technical jobs in North America benefit from someone with strong language skills. That said, please excuse any typos above.
Generally speaking, you should be asking these people intellectually demanding questions within the domain they will be working in. If they are not expected to have any specific skills, I would try asking them to copy-edit mangled text or do math problems. Both of those should correlate well with general intelligence.
I’d be wary of copy-editing text; for non-native speakers, that could be at least as much a test of English proficiency. Some very intelligent Indian coworkers of mine have to make frequent use of the spellchecker and grammar checker.
Math problems or (for programmers) programming problems are much better.
Twice exceptional people can be truly exceptional in their areas of exceptionality.
Do these job openings need a true generalist, or, like nearly every job today, are they specialized?
Show me your evidence that general intelligence is preferable to higher specific-skill intelligence.
And also show me your evidence that a person with higher general intelligence, but a specific deficit in an intelligence important for a particular job is better than a person with a lower general intelligence, but specific strengths that align with the strengths necessary for the job.
I’d start with Joel Spolsky’s “Smart and Gets Things Done” essay; it gives an approach, not specific questions, but I find it really helpful.
Second, I tend to test smart as knowledgeable about the relevant domain, and makes the less-obvious connections. So for example–I work with financial professionals. Basic competence–knows the products and systems they’ve worked with directly, and the basic toolset for the job. Impress me as particularly smart: know more about their products than they need to do their job–where are the key risks, how do we attempt to manage them, why are they important parts of our product suite, etc. Know about the history of the product designs, and what drove changes over time. See the connection between their specific function and the company overall. How did they learn about things–did they look up industry research?
This approach makes the questions easy to customize to the candidate while remaining predictable and consistent, since I’m probing them on the roles they’ve had specifically.
Thanks for the link!
What you’re asking here is pretty difficult. Rather than trying to measure general intelligence directly, you should measure proxies that are relevant to the job, even if they incorporate “works hard” and “studies hard” into your measure. Anyway, here are some suggestions:
Ask what books they’ve read recently, pick one and ask for a summary and what they thought of the book. Same with TV shows or movies if that is more relevant.
What are the pros and cons of using X method vs. Y method (both methods that they should be familiar with)?
Are there good published papers showing useful techniques for determining who will be a better worker? My impression is that in general, interviews don’t do very well at selecting better candidates.
Technical questions in your field are worthwhile; making sure they know their stuff technically is something you can do. Trying to figure out how well they’ll fit with the office culture is also important. But I don’t know anything more specific to recommend.
There is research on the topic. The short answer is that no method of potential-employee evaluation is good, but some methods are better than others, and this gels well with my personal experience.
What I do is go over the position I want filled and think about the qualities I would want the people in it to have, the knowledge they need to do the job, and the day-to-day process of work. Then I write up a list of questions about those things. I mean an actual list, and I keep score; if they get a question right, I just ask a harder one until I find the depth of their knowledge. The list is important: it helps make sure you don’t forget to ask questions you want to go over, it helps make sure you’re evaluating the candidates equally, and it helps with keeping notes. When I’m done I assess the various segments of the test on a fail, pass, high pass basis, then add in my assessment of the soft factors to come up with an answer. Usually someone emerges pretty clearly on top, but not always.
It’s not a perfect method; some things are hard to measure for, but I’ve been using it for several years and gotten consistently excellent analysts out of it.
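The scoring process described above (escalate question difficulty per topic until the candidate misses, then grade each segment fail / pass / high pass) can be sketched as follows. All names, topics, and thresholds here are illustrative assumptions, not from the comment:

```python
# Minimal sketch of the described interview-scoring process.
# Topics and grading thresholds are illustrative assumptions.
def probe_depth(answers):
    """answers: list of bools, ordered easy -> hard; stop at first miss."""
    depth = 0
    for correct in answers:
        if not correct:
            break
        depth += 1
    return depth

def grade(depth, n_questions):
    """Fail / pass / high pass, per segment."""
    if depth == 0:
        return "fail"
    return "high pass" if depth == n_questions else "pass"

# Hypothetical candidate: missed the hardest SQL question,
# answered every modeling question.
candidate = {"SQL": [True, True, False], "modeling": [True, True, True]}
scores = {topic: grade(probe_depth(a), len(a)) for topic, a in candidate.items()}
print(scores)  # {'SQL': 'pass', 'modeling': 'high pass'}
```

The design choice worth noting is the fixed question list: every candidate is probed on the same topics with the same escalation rule, which is what makes the per-segment grades comparable across candidates.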
In traveling overseas, what are the biggest differences you’ve noticed with your home country? These are the main things I’ve noticed, as an American that has traveled an above average amount.
Europe:
I am always surprised by how much the local cuisine dominates the restaurant scene in European cities. In the US (especially on the West Coast, where I live), for every “American” restaurant, there are several “ethnic” restaurants, but in Germany German restaurants seem to outnumber any other kind, in France French restaurants dominate, in Italy Italian restaurants, and in Spain, Spanish. Perhaps there is some selection bias, but I tend to avoid tourist areas, so I don’t think that’s it.
I’m always amazed how good the highway drivers in Europe are, and how good the highways are. German highways tend to always have an extra lane over their American counterparts. It’s a very enjoyable experience.
That said, European city streets pale in comparison to American streets, and city drivers in Europe are awful in comparison to US drivers. I suspect this difference is due to practice, since Americans drive far more, but that doesn’t explain why European highway drivers are better.
South America:
Obviously the differences are more pronounced here, but one thing I wasn’t expecting was how expensive some goods can be. Food is very cheap, but things I take for granted, like a pair of Levi’s, are cost prohibitive. In general though, I’m always amazed how cheap “stuff” is in the US.
Asia:
There’s a feeling of optimism that I don’t see elsewhere. The population knows that things are getting better, and that just really comes through. Also, it’s far more crowded, though I was expecting that.
I haven’t been to Africa or Australia yet.
I recently went to Greece (I’m American).
The first thing I noticed is the opposite of your general conclusion about European highway drivers. I found them abysmal. They were slow, did not have lane discipline, and the long-haul truck drivers are an embarrassment compared to ours; they seem completely untrained.
The second thing I noticed was that Greek restaurants in America are better than those in Greece, in general (there were some outstanding ones, but they were the exception).
Third, or maybe 2B, Athens is a shithole. First, there are people openly doing drugs and pooping/pissing on the streets everywhere; second, you feel you are going to be murdered by a motorcyclist any time you cross the street; third, the cabbies are rude. The only nice people I met in Athens were Canadians.
Fourth, outside Athens there are just a ton of abandoned buildings. Also visible graffiti everywhere in towns that in a similar American town would have none. These are places that are very pretty, outside of the buildings.
Fifth, and last thing that is interesting, the water is amazing. I’ve been to all sorts of American ocean areas, but none are as appealing to look at and swim in.
Overall I would not recommend Greece as a place to vacation. Were I to go back, it would be because someone is paying me to go.
Athens is indeed horrible. It has very important archaeology to see though.
I think most people prefer to vacation on the islands, which presumably are nicer.
I haven’t driven in Athens, but I tend to agree with your assessment of it otherwise. It was a fairly disappointing place for me, since I had always wanted to go, but it’s in a sad state right now. They did have better coffee than any other European country I’ve been to though.
Person living in Athens here.
1) Yeah, when people say that the highways in Europe are good they definitely aren’t talking about Greece. Our roads are mostly pretty bad, with a few shining exceptions, like Attiki Odos in Athens. Also, there is a joke that the only zebra crossing that drivers respect in Athens is the one right outside the airport: This lulls incoming tourists into a false sense of security and they are subsequently run over by a car when casually walking on any other zebra crossing in the city 🙂
2) About the restaurants, I guess it comes down to taste. In general, Greek restaurants outside Greece seem like a poor imitation of Greek food to me and I avoid them.
3) Well, Athens is not very clean and you are absolutely right about the graffiti. Cab drivers are not as bad as they were say 10 or 15 years ago. As in most big cities there are certain neighborhoods that locals know to avoid but a tourist may walk blindly into. In general, the best plan is to stay in Athens for a couple of days, see the Acropolis and a couple other sights and then just go to a pretty island somewhere and enjoy your vacation.
People don’t litter in Japan. I went to a Ramen Festival in Chiba with maybe 20-30 stalls selling food, at least 5000 people there. Instead of garbage cans all over the place, there was only one central place to throw away/recycle the disposable bowls etc. And there was not one piece of litter on the ground: not even a napkin.
Also in Japan, I visited an elementary school for their annual physical fitness day and was pretty shocked at how run down it was, with rusted playground equipment and old buildings that looked worse than the school I went to in rural California 30 years ago. It looked like a dystopian video game setting.
European highway drivers being better has a lot to do with the passing lane being treated as such: you are in it only when passing, and there is a very strong pressure against camping in it. I think city drivers being worse is inextricably linked to the fact that most inner cities in Europe have tiny, non-rectilinear roads – the problem is just much harder than 95% of American cities. (There are other contributing factors to both highway and city differences, obviously, but I think those are two important ones.)
The attitude to pleasure I think is a big difference – in British-descended cultures (so America, Canada, and I believe also Australia) there is a strong tendency to view pleasure in this weird suspicious way, like you’re somehow doing something wrong by doing things purely to make yourself happy in a short-term sense. I believe this results both in shitty food (compare all those cuisines to virtually any other in the entire world), and binge-drinking in a way that is very different from being drunk in other countries. And in pornography, for that matter, where most American porn focusses entirely on the man coming from penetrative sex or blowjobs, which is remarkably different from, say, Japanese porn where even the most awful molester videos or whatever will likely have an extensive oral and manual component. The idea of men having sex without spending a lot of time on their partner’s pleasure just doesn’t seem to be part of the equation.
I think mainland Chinese culture has a very strong tendency to not give a shit about other people who don’t directly benefit you, and a very powerful desire not to help others in need (cf. innumerable horrible videos of what happens after pedestrians get hit by cars). This is, from my understanding, a fairly understandable result of living under the kind of government they have.
@ JayT
I agree with a lot of what you said, but a few less-enthusiastic comments:
Might be true, but I had a hard time finding non-Turkish restaurants both times I visited Germany. I enjoyed that cuisine, but I found it a bit frustrating. In Austria, my wife planned ahead (as she is apt to do) using local recommendations online, and it worked out well.
The other thing that I didn’t enjoy about Europe was the large number of people smoking cigarettes. While I observed much less obesity compared to the US, I often wondered about the future impact of respiratory illnesses over there.
I hear from older Chinese mainlanders that things indeed have improved greatly. Parents are able to fully feed themselves *and* their child (or two children, as the case may be now).
That said, it still feels like a rat race over there, where distrust of others is the norm, and there is a weird pride in bending or even breaking rules. This is probably a common example, but I’ve seen ambulances, with lights flashing and sirens wailing, have a tough time moving through traffic. I’m told this is largely because others assume the ambulance driver is simply trying to get ahead of them, and few cars will attempt to pull aside. Whether this has implications for the prosperity of Chinese society is something that interests me, but discussing it would probably result in more vagueness (at least from me).
Yup. It’s an ugly aspect of the society created, I think, by 30 years of a government that encouraged psychopathic behaviour as the norm, followed by 40 years of just arbitrary corruption.
Interesting. I live in Europe, and that’s not my impression. I recently had vacations in Germany, in the Netherlands, and a short one in London. I like kebap meat, and had no trouble finding Turkish restaurants that sell kebap (and falafel) in any city. (Admittedly lamb kebap was harder to find than here.) At home too, we have Turkish, Chinese, Thai, Italian restaurants and more, as well as the ones selling local Hungarian cuisine. I haven’t been to America though, so I can’t compare.
I live in Germany and I can’t say I’ve noticed an abundance of German restaurants. But perhaps there are a lot of local cuisine places compared to the USA. I think they are certainly outnumbered by Turkish ones.
Compared to NZ, where I come from, there are far fewer Indian and Chinese restaurants, which of course makes sense when comparing the immigrant demographics.
I hate how cash-dominant Germany is. I always have to carry cash around with me, and it’s doubly annoying because they’re only just starting to make it possible to get cash out at retail stores and supermarkets.
Mobile reception is terrible in Germany, especially when compared to places like Cambodia or Thailand.
Tax in Germany is really high compared to NZ. I get to keep about 55% of my salary here, and then pay another 20% sales tax.
My Mind is my home, and they would like shared ownership. But this is no democracy, it is a dictatorship. And I cannot risk becoming enslaved. Enslaved by their will. I am the only one. Fit to be king.
Yeah, impulses stink. But without those impulses that we personally identify with, what would we have?
The Nine Worthies were, in the medieval imagination, the greatest warriors of all time. Three were from the Anno Domini era: conventionally King Arthur, Charlemagne, and Godfrey of Broth. The six more ancient were divided into Pagan and Jewish triads: Hector, Alexander the Great, Julius Caesar, Joshua, David, and Judah Maccabee.
But should not warriors of the imagination be accompanied by wizards? Who would the Nine Wizards be?
From the Christian era, pickings are scarce because wizardry was (ahem) frowned upon. Still, I’d definitely number Merlin among them.
From the ancient Jewish people, Solomon would count based on his dealings with jinn.
Leaving out Moses?! The man parted a sea!
Elijah should definitely be the third – called down fire from heaven in his contest with the prophets of Baal, and will be the one to herald the coming of the Messiah.
(Although we might be straying into “cleric” rather than “wizard” territory with prophets. Depends how strict your criteria are.)
Yep, that’s the thing.
Solomon definitely counts as a wizard, because the priests/Kohenim were a caste that royalty didn’t belong to. (Islam muddles the “cleric?” question by counting Solomon as a prophet…)
Well, by those criteria, does David count as a warrior, or a paladin (presuming he actually did anything other than take the glory from Elhanan)?
Even if David began as a paladin (is slaughtering Philistines for their foreskins really Good?), I’m pretty sure the thing with Uriah would have caused him to fall.
@metacelsus , indeed, David did seem to lose his powers after that. At least, he didn’t go out to war again, and his reign got a lot more troubled with plague and rebellions.
Arthur had Merlin and Charlemagne had Maugris (Old French)/Malagigi (Italian).
Some guy who turned water into wine, walked on water and resurrected multiple dead people including himself.
The staple high-level cleric power.
… oh.
It’s a slow contingent resurrection.
Chuck at SF Debris has a Lazarus of the Week award for episodes that feature characters being brought back from the dead. Tom Paris, hilariously, was Jesus of the Week in the episode Threshold, where accelerated evolution made his body kill him and bring him back to life.
LOL at Godfrey of Broth. The French do make great soups and sauces.
Three of them would be the Three Magi (Balthazar, Caspar, Melchior).
No two of the Nine Worthies ever worked together; if the wizards are supposed to be a similar list, you would probably have to pick just one of the Three Magi to include.
I nominate John Dee and Newton as two of the Christian wizards.
Are we allowed to bring this into the 20th Century? Because I’ll nominate the Wizard of Oz and the Wizard of Menlo Park. Also that Oppenheimer guy, on account of successfully casting “Dispel City”. Twice.
Nice. Or horrific.
The Wizard of Oz was a fraud, though.
Results speak for themselves – Oz was closer to Utopia than Britain in Merlin’s time ever was.
The capital of Oz was nice – due entirely to the magical conditions of Oz; it was nice before he got there. The west was entirely in the thrall of a wicked witch using mind controlled monkeys, the east was similarly terrified of a witch. The south was nice, but that was because of Glinda, and the north was okay because of the unnamed witch but a wicked witch lived there and presumably made life unpleasant for her neighbors while she kept the rightful heir to the throne in slavery.
Godfrey of Broth had the Kitchen Magician foisted on him by his arch-enemies, namely all those excessively numerous cooks.
For additional Jewish wizards, I’ll throw in Israel ben Eliezer, the Baal Shem Tov (Master of the Good Name), and Judah Loew ben Bezalel, creator of the Golem of Prague.
Christian triad: I’d go for Albertus Magnus, Paracelsus, and Georg Cantor. Depending on how strict you are about “Christian”, I might substitute Isaac Newton for Georg Cantor.
They weren’t the greatest warriors, they were the most chivalrous (otherwise the list wouldn’t include Hector and exclude Achilles). Though by that standard I question Caesar and Alexander. In any event, what criteria are to be used for evaluating the worthiness of wizards?
Nominative determinism, place name edition: the most recent large California wildfire (the Kincade fire) appears to have started on Burned Mountain.
That’s not nominative determinism.
That’s just statistics.
On different tracks in schools.
While I do like the idea of tracking students by ability, and having mixed age groups, there is the issue of bullies. In my experience, the kids who repeated a school year were almost always bullies, because they repeated the year due to a lack of conscientiousness and caring about punishment (unless they were immigrants; immigrants were frequently made to repeat a year for not knowing the language well, which has nothing to do with conscientiousness or rule following).
How do you deal with a 12 year old who has the math and reading skills of a six year old? The kid won’t like being stuck with kids so young; there will be resentment and anger, and the kid is much, much stronger. There will be a lot of temptation to abuse the younger kids, and most kids who are that bad at studying are less rule-following.
In here we stick him with the other kids his age who can’t or won’t put in any effort and keep him away from the kids that do care. We do this less than we used to, because God forbid we spend money on anything but education for ~~rich white people’s kids~~ our best and brightest, but it’s a solution.
Right, if you don’t mix them by age, that would mean more groups. That would mean more money, but at least I see it working.
It would be expensive, though.
It doesn’t really cost more at all. Instead of 10 high schools you get into because your parents made a fuss, you have 5 for whomever, 3 mid-high ones, and 2 for the nerds.
Discipline, followed by expulsion if the child does not respond to discipline. Hopefully this is the same way you deal with any student who keeps bullying their fellow students.
Indeed, bullying should be solved by solving bullying. Not by banning all kinds of things that are kinda associated with bullying, only to find out that in the end we still have a lot of bullying, but we don’t have many other things (such as the opportunity to study at your own speed).
The problem with expulsion is political. Not only in the “culture war” sense, but in the general sense of “it makes some people really angry, and most people avoid making personal enemies”. Specifically, it angers the parents of the bullies, and also the parents who imagine their child could be the next expelled bully.
Bullies are a child analogy of violent criminals. Just like we don’t handle crime by waiting for a Superman to eliminate the criminal, we can’t handle bullying by waiting for a heroic teacher who intervenes (and if it turns out the child’s parents are e.g. lawyers and want to take revenge on the teacher, the school administration will likely throw the teacher under the bus to save their own asses). Expelling bullies should be “business as usual”, with clearly defined procedures and rules. We don’t have that. (Well, this is country-specific, so I don’t want to talk for the entire planet here.)
I suppose most people underestimate the seriousness of kid-on-kid violence. They will look at the bully and think “oh, it’s just a small child… the proper punishment would be scolding (because corporal punishment is inhumane and illegal) and maybe some extra homework”. Meanwhile the bully is stabbing the victim with a pencil every day, and one day the victim concludes that suicide is the only way to avoid it.
The usual argument is that if we start treating the young bully harshly, expel him from the school and mark him as a “black sheep”, we are creating a self-fulfilling prophecy. The poor young bully now has his career ruined, people will view him with distrust, and of course he now has no choice but to turn his back on society. Now we have made a real criminal of the future, and we will have to deal with him. Therefore… we have to throw the victim(s) under the bus, to give this potential criminal many second chances to return to pro-social life. I see a point in this argument, but my sympathies are still strongly on the side of the victims.
There’s a nice quote from early Soviet teacher Anton Makarenko.
Pedagogic theories claiming that one cannot expel a bully from the classroom, or a thief from the commune (“you should rehabilitate them, not expel them”), are the ramblings of bourgeois individualism, accustomed to the dramas and “passions” of the individual while failing to see how entire collectives perish because of it, as if those collectives were not made of individuals as well.
I’m not sure about “bourgeois individualism”, but the rest rings true to me. A bully is a focal point of conflict and of the authority’s decisions, while the rest of the class is an amorphous mass that does not inspire the same compassion.
I mean, we could also do both: expel the bully and try to rehabilitate them, by placing them in a special class/school that deals expressly with bullies, as opposed to just expelling them and telling the parents “well, you figure out by yourself what to do with your sociopathic kid now, good luck”. I know this is a marginal approach occasionally used in France, where very unruly kids can end up in special schools with classes of 5-6 pupils at most, with multiple educators per class.
Of course that requires investments and resources, which few people are willing to spend on bullies (let alone the kind of particularly violent and antisocial bullies who get sent to these classes).
I agree that this is what we would do in a perfect situation. In real life, the school will prefer the cheapest solution. Which currently means turning a blind eye; or giving the bully a stern talk and pretending that it solved the situation (from the school’s perspective it does: the next day the bully will punish the victim for talking, and the victim will stop complaining officially).
A situation where kicking out the bully would be the cheapest solution, would be an improvement over what we have now.
No, the problem with expelling bullies is that bullies are popular and their victims are not. Humans see Bob mistreat Charlie and instinctively think well of Bob and are disgusted by Charlie.
There’s a popular perception/meme that bullies are acting out due to their own problems and weaknesses, but this is because people rewrite every cruel thing done by someone powerful as brave and just. Spectators simply cannot see bullying.
The correct response to being bullied is to leave: exit, not voice. Go elsewhere. If you are attacked by another person, and that person isn’t immediately and strongly punished, you have learned an important fact: you are an acceptable target in this society and everyone here hates you. You cannot make them like you; you can leave, and find somewhere else to be.
This is often difficult, as you’re a child and don’t have explicit choices of where to go. (EY once said, in HPMOR I think, that adults see the problem with bullying as being that it sometimes requires adults to take notice of things children do.) But if a child were being bullied and asked me for advice, I’d tell him to fight back in the most violent way he can possibly imagine. Punch nuts, gouge eyes, fishhook, whatever it takes. He’ll lose, since bullies target the weak, but if he’s lucky, he’ll do enough damage to be expelled, and end up getting to be elsewhere. If not, maybe bullies will see him as weird enough to not be worth the trouble.
Jesus Christ, I was mostly with you until the last paragraph. Gouging eyes as a response to bullying!? Bullies are pretty terrible, but in all but the most extreme cases don’t deserve to have their eyes gouged out! Justice aside, blinding a classmate doesn’t get you expelled to a wonderful new school with new friendly classmates, it gets you expelled to the school where students who gouge people’s eyes go.
My emotions agree with you 100%. My intellect is not so sure, especially that last paragraph. Also, in my experience, a lot of bullies fold if you fight back at all – they aren’t the strongest or fastest, just the meanest.
IIRC, the traditional solution to bullying is to enroll the victim(s) in martial arts classes, in the expectation that this will assist them in winning the next fight.
But that doesn’t work so well when the school has a zero-tolerance policy against violence, regardless of who started it, and/or has too few staff to see who started it, and/or has teachers/administrators who prefer the bully.
If they are rational then they will avoid needlessly attacking even a weaker opponent that can hurt them, even if they could ultimately win the fight.
So what? They aren’t going to expel anyone anyway, are they?
Though I think you go a little too far, I mostly agree: as was apparently later discovered with Columbine, people have a mistaken impression that bullies (in this case, the most extreme kind) are usually outcasts and misfits; after all, bullying is bad and we are good, therefore “we” would not do that, especially not the children of respected community members Alice and Bob. In my experience not every popular kid is a bully by any stretch, but most, if not all bullies are themselves somewhat popular. The slightly odd kid sitting by himself all the time couldn’t successfully bully even if he wanted to because he himself is too vulnerable a target.
What’s more, in my experience, not only is the bully often a popular kid whose parents are also respected members of the community, but their academic performance is not necessarily the worst. It’s usually not the best, but it’s not at all necessarily lagging way far behind. The bully isn’t stupid, posing another difficulty with any attempt to expel or quarantine him among other “problem” cases.
And I agree that verbally or physically fighting back is the way to go, assuming parents or administrators won’t do anything, though hopefully one begins with verbal insults and shoves before escalating to eye gouges and groin shots in the face of presumably severe, physical bullying. I was told for years to “just ignore them,” which I did, and which only succeeds in sending the message “there are no risks or consequences for bullying this person.”
It also sends a message to yourself that you deserve the treatment, if defending yourself is ‘bad.’
That is a reasonable thing for a 10 year old to think, but if EY himself thinks it as an adult, it shows a massive lack of emotional intelligence.
Teachers in general do not like to do the things that would stop bullying before it got serious.
Even many parents aren’t willing to do that much for not-really-dire levels of bullying. “Just ignore it”, “fight back”, and similar advice is given.
Such as…?
Just generally creating an atmosphere of trust would improve things. For every student to be sure that when they have a problem, there is an adult person who will listen to them. Preferably the adult person (a school psychologist?) would organize regular 1:1 talks with everyone.
Problem is, things like this are hard to systemize. No matter what algorithm and list of checkboxes you propose, there is always a way to check all the boxes while making it obvious that students had better keep their problems to themselves. So… I can imagine it being done successfully, but only when people really want to… which I assume most do not.
@eric23
As Viliam says, creating trust is very important.
Not making fun of kids, or even publicly humiliating them, for snitching would be a start. In my school, it was quite common for teachers to kick down the kids who reported on others.
Stopping every small bullying behaviour you see. Not downplaying kids’ feelings; respecting them. Don’t force kids who have been in a fight to do group work together*. Let kids who don’t want to interact with other kids be free; don’t force social interactions; let the loner with the book read the book, don’t force them to play with others. Don’t force friendships upon people.
And frequently, teachers themselves are bullies; they designate a kid they hate, and make fun of them, giving the class permission to bully the kid.
I have had teachers who had the attitude EY describes; for them, they would rather not have to deal with it; they don’t care at all whether there is bullying going on, as long as there are no complaints (or suicides or other forms of trouble).
*That thing where you force kids to apologize to each other and shake each other’s hand? It’s abusive. You wouldn’t do it to an adult, would you?
Neither of these comments supports EY’s (or his character’s) assertion that “adults” as a class are oblivious and prefer to be oblivious to things children do.
@eric23
Many parents are; most cases of bullying can be stopped by parents who establish a high-trust relationship with their kids, by showing them they have their back.
And parents are the adults who care the most about the kids. The rest are worse.
I had one parent who was very supportive, and the other was completely unapproachable. I was lucky. Many kids have no parent who listens.
Show me a system intended to stop bullying, and I will show you a system bullies know how to bastardize to further victimize their targets.
Eh, authority cleaves to authority. The parents of the victim typically back up the school, not the kids, because that’s what’s expected of ‘good people’. Kids are notoriously unreliable, you know. (The parents of the bully do otherwise)
When I was in 5th grade, there was a poor kid in my class who was bullied a bit. One day during recess somebody on top of a jungle gym spit on my hat, somebody who was not exactly a bully, at least not to me, but he was much larger and tougher than me. Rather than blame him for spitting on my hat I blamed the poor kid, and I hit him in the arm or something.
Then the poor kid’s dad came storming over from the parking lot. The guy looked like a Metallica roadie and he was obviously angry, and he really should not have been anywhere near the parking lot at that time of day, so he was clearly spying on us all. Other kids warned me to run away, that he was known for hitting kids, but I stood there frozen, feeling guilty. He yelled at me a bit but that was about it.
The poor kid’s dad got in trouble with the school of course, and not for the first time. The poor kid was very embarrassed and apologized to me about it the next day but I brushed off his apology, and I thought to myself that I wouldn’t be hitting the kid again and I was in the wrong anyway, so probably a good thing his dad did.
Depending on situation, fighting back may have the advantage that the bully can no longer pretend that they were “just playing”.
Speaking from personal experience. I was briefly bullied by a classmate who did martial arts competitions. He often asked other people to punch him, just to show how he can deflect the attack or ignore the pain. I had no realistic way to hurt him by fighting back; he probably would have enjoyed some more interaction.
However, he also tried to keep an image of a good student. So he framed his bullying as “just playing”. Once I started fighting back, this wasn’t possible anymore. He could have easily beaten me to a pulp… but then he could not have defended it as “just playing”. So my desperate bet paid off.
Escalating in violence is a way to deal with bullies.
There are other ways of escalating that help deal with bullies, but those require your parents to be on your side. Unquestionably on your side, with the resources and time they have. It’s mind-boggling how rare that is, and it seems like it’s mostly bullies’ parents who manage it.
Parents who will back you when you escalate legally mean a lot. Going to the police over theft of you property; suing for assault; creating lots and lots of trouble for the school, the teachers, the bullies’ parents, etc.
I’ve heard of a case where a girl was bullied; her things would disappear and reappear, sometimes broken. She once walked out of the school, went to the police station, and reported the theft of her things, and she wasn’t bothered again.
But that also requires guts, and somebody on your side. The issue is, kids can usually only resort to an escalation in violence; they usually don’t have the tools or skills for legal escalation (writing letters to the ministry of education; making complaints to the education inspector, etc.).
Tracking is a goal for something like 95% of students. The ends of the Bell Curve are separated out into different schools. The Districts here have special schools for specific cases like the above. Those at the very high-end are half-integrated into the normal school system and half-integrated into local universities and their own special classes/teachers.
In general, outside of really poorly treated children, this isn’t particularly true. Little kids are generally not the target of much older bullies, it is slightly younger, or small for their age or odd in way X, Y or Z that gets kids bullied. Lots of bullying is done as status seeking behavior and it isn’t uncommon to find bullies who are protective of younger kids. Put a 10 year old in a class of 6 year olds and he is almost always the biggest, strongest and most capable in enough ways that the fact that he is lousy at math and spelling really doesn’t matter much.
There is a distinction here between bullies and kids who are unable to control their own emotions and who get in fights easily and lash out though, the latter is not who you want around small kids (or really anyone).
But older kids like to avoid interacting with younger kids, and would be pretty resentful if they are forced to be in a class of much smaller kids.
We don’t get much bullying of younger kids by older ones because we don’t force them to interact that much.
Bullying, at least the kind I experienced and witnessed, is far less about physical power or age than about social status.
This reminds me of an old Half Sigma post:
https://halfsigma.typepad.com/half_sigma/2010/08/redshirting.html
You can game the system by “redshirting” your kids. Should you do it? Naively, your kids gain an “extra year” of “free” stuff, though of course you will have to make that up. What they lose is a year in the labor market they will never get back. Similarly, putting them ahead will give an extra year in the labor market (though in some cases they would end up failing out and needing an extra year of education, most likely in college), but potentially at the cost of harming their social development. Society as a whole is harmed by redshirting and benefits from letting kids skip grades, so we ought to make it illegal to redshirt kids and illegal to hold kids back a grade, while encouraging bright kids to skip grades if they choose to do so.
It seems to me that if the goal of K-12 education is teaching material, the main problem with it, and the main area where reforms could improve it, is lack of incentives. I remember being in elementary school and they had this program where you read books, took a test based on the books, and could get a candy bar if you passed enough tests. (I don’t remember the exact details of the scheme.) As I remember it, it was a pretty effective motivator and couldn’t have cost much to run. But for the most part I didn’t see incentives to try hard until high school, when it did start to matter, though only for the college-bound students. For the non-college bound, there wasn’t an incentive to do anything but the bare minimum necessary to pass. “We’re never gonna use this, it doesn’t matter,” they said. They weren’t wrong.
Experiments have been tried with money and generally aren’t effective, because the amount is usually insufficient to be a good incentive. If you want students to spend many more hours studying, $50 at the end of the semester isn’t worth it. But you could set up an incentive system using the very thing you’re taking away from the students: their time. You could have tests of the subject matter every two weeks, give the kids who pass the next day off, and send the kids who don’t back into class to review the material. For younger kids whose parents want the free daycare aspect, you could send them to the playground or let them play around on the computer or read in the library, whatever they want to do. If you believe in the massive gains that can come from moving an economy to an incentive-based structure, I don’t see why you shouldn’t expect similar gains in education, exceeding the potential loss due to some kids getting less education-time. There’d be concerns that some kids will pickle ree due to the unfairness of the system, but you rarely see the poor pickle ree-ing because of the unfairness of the economic system. If they are taught that this is how the world works, they’ll mostly accept it. There will be concerns that students who already know the material will coast and not learn anything more, but you should expect that very thing to happen in our current system: if a student is being taught material he already knows, he won’t magically absorb more advanced material.
Astrology isn’t necessarily claiming *cause* from the stars, merely *correlation* with the movements of celestial bodies.
So, yeah, you (the original half-sigma author) have discovered that there is something to astrology here.
I doubt there’s all that many people who, looking back upon their life, have said “Boy, I wish I’d started school earlier so I could have gotten more work in before I retired”. Doing better in the dating market is likely a heck of a lot more important. The problem with redshirting is that it’s zero-sum; some kid is always going to be the youngest, and the kid that is the youngest will _always_ be the youngest throughout their childhood and adolescence. If you want to solve the problem with the oldest in a cohort doing better than the youngest in general, you have to get rid of the cohorts by not batching kids by age in the first place. This seems like a difficult problem.
I’m that person, and until my late 20s I was volcel as a rebellion against my biology’s imperative to reproduce.
I was also the youngest male in my cohort (one girl was younger).
Isn’t this a reason to start batching kids in non-grade-level elective classes in middle and high schools? Was your high school not like this?
“I doubt there’s all that many people who, looking back upon their life, have said “Boy, I wish I’d started school earlier so I could have gotten more work in before I retired”. Doing better in the dating market is likely a heck of a lot more important.”
For men above a certain age the dating market correlates pretty strongly with the labor market.
“The problem with redshirting is that it’s zero-sum; some kid is always going to be the youngest, and the kid that is the youngest will _always_ be the youngest throughout their childhood and adolescence. ”
That part of it is zero-sum, but if you account for the lost year in the labor market due to our society’s stupid labor market structure, it’s negative sum.
“Boy, I wish I’d started school earlier so I could have gotten more work in before I retired”.
This is a variant of the classic deathbed regret cliche, which is based on people forgetting they were paid to be in the office.
The best way not to be in the dating market at that age is being successful in the dating market when you’re younger. An extra year of earning earlier in life is not nearly going to make up for being out of step when your peers start dating.
(and yes, you get paid for that extra year. How much difference does it make in the long run?)
A lot of that money goes to things they don’t actually derive much enjoyment from, like a bigger house, an extra car, or (more controversially) a house in a more prestigious school district.
There is also significant pressure in office jobs to work more and move upwards, compared to what people would choose on their own. This consists not only of financial incentives from your boss, but equally of social pressure from your coworkers (to whom you will appear less competent, and a freeloader on their backs, if you don’t work as much as possible). As someone working in an office job now, I can attest to this pressure.
In short, I’m pretty sure the deathbed regret is real.
When you get your first salary, it’s a huge change from “having to beg your parents for pocket money” to “having your own budget”. Depending on your family’s financial situation, a large part of that budget is spent on fun, at least during the first few months. Only later do you start thinking more seriously and saving money. At that moment, the job is new and exciting, and you still have some naive expectations about how the sky is the limit. So in the short term it feels fantastic.
On the other hand, a few decades later you have already seen a dozen failed projects, spent thousands of hours in pointless meetings, and got the memo that you are not going to be the next Einstein. Most of your income goes to boring regular expenses (mortgage, food, kids…). The only thing you really desire is to take a break; but you still have a few decades to go. You are offered a 10% raise in return for 50% more stress at work. There is strong pressure on you to take it. So you take it. The money somehow disappears (okay, now you live a bit closer to the center of the city, finally have a new car, the whole family now buys slightly more expensive food and clothes, etc.), but the stress remains and damages your body.
The smart solution would be to start sooner and finish sooner. Either the “early retirement”; or have kids in your 20s when you are still full of energy, so that in your 40s the kids move out of home and get their own income, and you can now optimize for a job that doesn’t kill you.
The problem with these death-bed regrets is that, realistically, people will not spend their extra not-work time with friends and family. They will spend their time playing Candy Crush, and then blame their job for making them too exhausted.
Your health comes first, followed by strong human relationships, followed by money.
I can absolutely say that I wish I had spent MORE time working when I was younger, because I spent a lot of time goofing off with bullcrap. I suppose I might say the same thing about now, 5 years from now, but good God, current ADBG thinks 12 hours a day is enough.
@A Definite Beta Guy
I definitely agree with that. There are lots of things I look back and think “I wish I’d worked harder at that than I did”.
1) How do you know? 2) Even if true, it might be a good thing. Downtime after a hard work day is probably a good thing, stress- and health-wise, and very different from spending your entire day as “down time”.
I was bored to tears in nearly every class I had before college (exceptions were a couple math classes and a couple science classes). Starting me in school a year later would have added a year to my time utterly wasted in babysitting operations disguised as schools, for the benefit of being better in sports and more mature. Thanks, but no thanks.
You know, I always suspect there are people who *do* wish they’d spent more time in the office. Think of the scientist who was *almost* on the shortlist for a Nobel prize, but didn’t quite get there. Or the person whose business was *almost* the next Google, but didn’t quite succeed.
Sure, you don’t have to sell me on this perspective. But as far as the normies are concerned I think the social status that those things are associated with does make them happier. At least that’s what revealed preference says. And I think that if they saw their work hours cut most wouldn’t be much happier, as they’d substitute leisure activities that are also just social-status signalling games and don’t inherently lead to happiness. For instance there’s a bar across the street which plays music very loud, not enough to bother me but enough that I can say that it must be hell for those inside the bar. Why do they put up with it? For a lot of them, I think it’s a signalling game: they want to show how young and virile and tough they are, as opposed to uncool people like me who can’t take it. A lot of high school was like that, and my point is that you’d be trading time to participate in that one for time participating in the adult version. A big difference between these games is that the adult version has positive externalities: the consumer can buy more, better, cheaper stuff as a result of that rat-race. No one benefits from the high school popularity rat race.
The deathbed regret cliche reflects social desirability bias, it’s socially desirable to care more about family and leisure and art and travel as opposed to money. But this conflicts with the fact that it’s socially desirable to have money. So you do one thing, say another.
Doing better in the dating market in high school basically means starting with higher status during your formative years and will be correlated with doing better in all markets for the rest of your life. Social status, dating, career, longevity, health, mental health, everything.
This is worth way more than starting work a year earlier.
“Gap years” are a revealed preference for this which will have a much smaller effect than redshirting, and yet they do have a significant effect on college performance.
The same things that make you better off in the dating market will make you better off in the labor market, so I doubt loss of a working year is a trade-off.
It could well be that much of the height effect in the labor market comes from being taller than your peers growing up. Those are your formative years after all.
Does anyone know where I could buy a piece of furniture that is a combination bed/sofa? It would be a twin XL mattress, but with a padded “headboard” along one of the long ends of the frame, so it would also be a sort of couch.
Try “daybed” first. You can also google “day lounge” for other varieties.
This sleeper sofa at Ikea is almost the dimensions you want.
It’s 1 inch shorter than a Twin XL, but also 11 inches narrower (without folding down the other half of the mattress that functions as the “headboard”).
Maybe a futon?
For the kind of personality that wants to be remembered, the best job (for a typical person) would be an elementary school teacher. On average you’ll be remembered for the longest period of time by the greatest number of people.
I would think high school teacher would be a better choice. I remember considerably more of them.
I’m the opposite.
A high school teacher definitely wins on the “remembered by the greatest number of people” part. Elementary school teachers have one class of about 25 kids, all day, all school year. High school teachers have totally different classes that size rotating through their classrooms 5 to 7 times a day, and then these change completely halfway through the school year.
My mom was an elementary school PE teacher.
Perhaps a high school PE teacher sees more students on average, but those students are also on average 7-8 years closer to death. So what’s the balance between length of time remembered and number of people remembering you?
Partly I remember my elementary school teachers better because I did have them for 180 days at 6 hours/day, and still saw most of them daily when I was in other grades. There are simply too many teachers in high school to see them everyday, and the interaction is sporadic enough that there are fewer opportunities for memories.
The average high school teacher teaches about 100 different students per semester though, so they only need to be remembered 12.5% as much as a grade school teacher by each student to have the same level of remembrance as them. Also, since high school teachers are around students that are going through more changes in life than grade school teachers, I think they are more likely to say or do something that affects a student’s life.
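That break-even figure is just a ratio; a quick sketch (the per-year totals here are my own illustrative assumptions, based on the class sizes mentioned above):

```python
# Break-even "remembrance" arithmetic from the comment above.
# Assumptions (illustrative): a grade school teacher has one class of
# ~25 students for the whole year; a high school teacher sees ~100
# students per semester, i.e. ~200 per year.
grade_school_students_per_year = 25
high_school_students_per_year = 100 * 2

# Fraction of per-student remembrance the high school teacher needs
# from each student to match the grade school teacher's yearly total.
break_even = grade_school_students_per_year / high_school_students_per_year
print(break_even)  # 0.125, i.e. the 12.5% figure quoted above
```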
Epistemological status: My wife is a high school teacher who is often visited by former students who graduated more than 10 years ago. I’ve never known a person who kept in touch with a grade school teacher (unless the teacher was already a family friend).
My mom would be approached by high-school aged and adult former students at times, who would address her by name, and astonishingly to me, would remember their names in return.
On balance, though, you may be right. Especially with things such as high school reunions that keep former students connected to the schools.
The open threads have been less interesting since the last mass banning.
Keep up this kind of talk and I’ll post another screed on toy airplane regulations.
Do it!
Link to the first?
I find world war 2 history interesting.
So a year or two back, I noticed spam calls started to get more sophisticated. The recordings seemed to be much higher quality, to the point where I could briefly be fooled into thinking it was a real person. They also did this Dora the Explorer type thing where they’d pause for you to respond. I was impressed the first time I heard it, though they would then go tell me about my credit card debt (I don’t have a credit card), vehicle warranty (I don’t own a vehicle), or claim to be from Visa/Mastercard account services (I wasn’t aware they had a merger), which kind of gave away the game.
Now the robocalls have gone back to sounding like a low quality recording or a text to speech program. Weirdly some of the text to speech ones still say “This is Heather from $COMPANY…”, as though that’s somehow still believable. It’s a strange thing to notice perhaps, but I can’t help but wonder why the change. Also now I get ones in Chinese. Any idea what the deal with that is?
[Epistemic status: somebody said it on the internets… oh wait.]
I read that the first stages of many scams may deliberately be rather stupid and obvious precisely in order to filter for people who can actually be fooled by this kind of scam. That way, someone intelligent enough to be sure to see through it eventually drops away immediately, and the scammers don’t waste whatever resources go into the scam on trying to get them through the subsequent stages. Although that was said about the “Ethiopian prince’s inheritance” kind of scam, where the first stage is really cheap but the following stages are much more expensive by comparison; I don’t know how well it applies to scam calls.
Rollback of regulations. The US government regulates cold calling, and those regulations have gotten more lax, leading to more entrants of lower quality into the market. Additionally, one of those standards has to do with call quality, so the rollback has seen that reversed as well.
I’m pretty sure scam calls are illegal in the first place, I really doubt they’d be paying attention to the regulations of cold-calling.
Yeah, I think they spoof calling numbers and close-up shop every so often to dodge the law.
My impression is that regulations have been tightening. One of the hurdles, however, is that it’s currently (or was until recently; I had heard that this regulation was going to be fixed soon) illegal for your phone provider to reject likely spam/scam calls for you, even if you request it. Furthermore, overseas scam call centers are able to ‘spoof’ a random local number on caller ID because of the laws that allow your local politician or doctor’s office to outsource their calls to you and ‘spoof’ the number they want you to see should you want to call them back.
So I get a lot of what looks like local calls to my cell and my desk phone at work that turn out to be cold calls with robots at the other end. I would really like caller ID to somehow be able to tell me that ‘while this appears to be a local phone call, it’s actually coming from a call center in China’. It’s hard to believe that Verizon doesn’t have access to that information.
Verizon does warn me (on my cell) of some suspected spam callers.
The FCC should just require phone companies to prevent caller-ID spoofing, while allowing customers with more than one landline number to arrange which number outbound landline calls are considered to originate from. Caller-ID blocking should remain allowed, to allow for the use case where anonymity is required.
(The canonical example of this is calls from a women’s shelter. Showing no number is just as effective at preserving anonymity as spoofing the number.)
[Epismi-whatsit status: Most of what I ‘know’ of Britain is from watching episodes of Are You Being Served, Black Adder, and Dad’s Army so I really have little to support my thinking]
An idle thought sparked in part by @fion’s U.K. newspaper sub-thread: The current U.K. governmental “Left” and “Right” fit more the 1940’s U.S.A. Left and Right (Truman vs. Dewey) than the current U.S.A. left and right.
The Tories are Eton/Cambridge/Oxford which is roughly equivalent to the U.S.A. “Ivy league”, Labour is led by Jeremy Corbyn and still has ties to labor union left.
In the U.S.A. some Texas Oil Millionaire funding and the ascendancy of Barry Goldwater gave the U.S.A. “Right” a different, non-Ivy cast in the mid 20th century, and the non-public-school-teacher labor-left has been impotent since the late 20th century, with the governmental Left being more collegiate now.
I’m sure there’s holes in my scheme, which you may pick apart.
From my understanding, Jeremy Corbyn is kind of the last gasp of that subset of the left though and isn’t really representative of their membership as a whole (although that fact might be wrong since he’s somehow kept his grip on the party). My understanding is the Labour’s leadership is as much or more Tony Blair (oxford) as it is Corbyn. Furthermore, Corbyn himself is from, if not the upper class, at least upper-middle class.
@Aftagley wrote
True, Tony Blair didn’t seem remotely working-class, not even in the “Michael Caine/Maurice Micklewhite Jr. seems ‘posh’ to Americans” sense, while Blair’s American equivalent Bill Clinton could turn on an Arkansas accent sometimes and wasn’t born rich (though he did go to Oxford as well, and later became rich).
Also on the American side, Truman’s predecessor Roosevelt was very patrician and Truman’s successor Eisenhower was more middle-class, but it’s more a tendency. I have a hard time imagining the Democratic Party nominating someone like Jimmy Carter today (Sanders is interesting, just about the only urban working-class candidate I can think of after maybe Al Smith), and on the Republican side, even though his candidacy was recent, I have a hard time imagining someone like Mitt Romney being nominated again soon.
The basic way I like to explain English politics to Americans (though I’m an American myself). Imagine if the 19th century Populist Party had not been absorbed by the Democrats and had evolved into a full on socialist party. Then Britain went through a period of three party system between the Liberals (Democrats), Tories (Republicans), and Labour (Populists/Socialists). But Labour slowly (and then very quickly) ate away at Liberal support to the point the Liberals are now a minor party. British politics is thus basically Republicans against Socialists. Keep in mind socialists are not ‘Democrats but more radical’. They are a distinct group. For example, the Liberal Democrats remain more pro-EU and pro-immigrant than either Labour or the Tories.
This means British politics tended to swing more radically than US politics. It basically switched between conservative monarchist capitalists and nationalizing leveler socialists until roughly the 1990s-2000s. (Worth noting: both sides remained committed to democracy and basically allied with the United States, which ties into the Indian thread on how socialism does not necessarily lead to dictatorial Communism. And also doesn’t cast a great light on people who honeymooned in Moscow when there were vigorous socialist movements in western Europe.) Labour lost ground in the ’70s, partly due to internal conflicts: the party in control of the government and more radical union heads clashed several times. There was also a right wing resurgence in the 1980s, so Labour had to move to the center to remain competitive. These days, it appears to be swinging back to the left.
Also, something I want to highlight: going to a good school didn’t used to be a precondition for the presidency in the US, especially for conservatives. The past five presidents have all had Ivy League educations and that is the longest streak in history. If you exclude those last five (Bush I through Trump), there were fifteen Presidents in the 20th century: ten Republicans and five Democrats. All five Democrats had Ivy League educations. Two of the Republicans did. Eight did not, though they all had college educations. This is the opposite of England where (as you identify) Etonian education and the like is much more of a conservative marker.
@Erusian,
That seems a good tutorial on the British, thanks!
On your history of recent Presidents’ schools, I’ll add that U.S. Supreme Court Justices didn’t always come from Harvard or Yale; one nominated in the 1940s didn’t even go to law school, he apprenticed in a law office instead (it’s now rare, but some still become attorneys that way).
Exactly. That was one big reason I was disappointed when Trump failed to nominate Amy Coney Barrett. (The other one was that I consider her conservative Catholic ties a plus, in that they’d make her more sensitive to religious liberty issues.)
You’ve left out the part where the socialists and Social Democrats split, and the Social Democrats merged with the Liberals (which is why they are now the “Liberal Democrat” party and not just the “Liberal” party).
I plead my moniker of ‘basic’. The split there was complicated and didn’t reverse the decline of the Liberals so I glossed it over.
A nitpick: Harry Truman did not have an Ivy League education, or even a degree. He took classes in business and law from a couple different schools in Missouri, but never graduated.
Labour under Blair was much closer, ideologically speaking, to the US Democratic party of the time. The difference today is that in the UK the party members decided to put a socialist in charge, whereas in the US they just came close (see the 2016 Democratic primary).
@broblawsky,
True, and Labour going socialist is returning to its roots, while the closest the U.S. got to socialism (and indeed Mussolini-style fascism, while also fighting fascists) was during the Second World War, when Henry Wallace was still Vice President.
Anyway, my larger point (if I had one) is that both in style and substance the U.K. and U.S.’ ‘left and right’ really don’t seem to match up one for one.
Off the top of my head the (right) Christian Democrats of Germany looks like the moderate wing of the U.S. Democratic Party, and the (left) Social Democrats of Germany look more like the progressive wing of the U.S. Democratic Party combined with the British Labour Party, with no real equivalent of our Republican Party.
Different U.S. state parties used to be different as well: the Minnesota Democratic–Farmer–Labor Party used to be quite different from the Mississippi Democratic Party, but now they’re pretty much identical to each other and to the national Democratic Party; New York Republicans used to be different from Arizona Republicans, et cetera.
I think there are still large differences between different regions in the two parties, although these do not consist of large regions anymore but mini-regions. I worked with a guy a few years ago who was a Democratic activist from Atlanta who moved to Minnesota for a job. He was living in Minneapolis, but when he went to a Democratic caucus in the city, he was shocked by the ideology there. He told me they were essentially all socialists, which sounds consistent to me with the usual politician rhetoric I hear around here. I don’t think he would have had the same reaction if he had gone to a suburban Democratic caucus, and certainly not if he’d gone to a rural one in Minnesota. So there still are great differences within each party in different areas.
One of the historical facts that tends to get left out of the usual American picture is that FDR was generally pro-Mussolini before the war and that his economic approach in the first New Deal was essentially fascist. We are used to thinking of “fascism” as a term of abuse, but that isn’t how it was seen in its early years.
I feel like aggressively conservative folks emphasize that a lot, which is why the “fascism is right-wing/fascism is left-wing” argument always heats up.
I think that someone here (perhaps you) called the New Deal “the greatest hits of fascism and communism”, which I found pretty accurate.
What’s interesting to me is that the historicity argument is so disconnected from actual policy. On the right people use socialist as an epithet, on the left some people embrace it as a badge of honor, but the socialist policy package is universally rejected. The number of people that actually want to nationalize google or walmart rounds down to zero. Likewise, while you can get plenty of people to praise the New Deal, and even some that want to revive the WPA, you don’t find anyone that wants to set up industry councils where wages, prices, and production volumes are hashed out by agreement between workers, owners, and the government.
(All from a US perspective. There may well be non-trivial numbers of actual socialists and (economic)fascists in other countries.)
How does the New Deal materially differ from elements of feudalism?
And during the times of feudalism, was there a left wing and a right wing? Perhaps it came down to who owned the land (non-militant religious orders = left wing, lords = right wing?; agricultural land = right wing, cities = left wing?)
This sort of assertion never felt very fair to me. While I can agree that Warren, Sanders, et al. aren’t saying the N-word, they’re still saying they should be able to do all sorts of things that end up meaning the same thing. They still distrust any economic agent that makes a profit, and act consistently with a belief that they should be able to approve or disapprove of any source of that profit.
Meanwhile on the other side, what you are saying, and I’ve heard it before, strikes me as very unfair. In my mind there is a huge gulf between ‘is skeptical of profit and in favor of lots of regulations’ and ‘wants to collectivize the means of production’. Not sure how to bridge that gap.
Well, to bridge it, you’ll have to convince me that Warren, Sanders, et al. don’t want to collectivize the means of production. Given that Warren keeps saying she wants to do things that will functionally result in collectivizing the means of production, and Sanders, among other things, applauded Chavez for actually collectivizing the means of production, I admit, you have a hard task ahead of you.
Merely stating there’s a huge gulf in your mind won’t be enough; surely you can see why not?
Name three.
Here’s six.
“You built a factory out there, good for you. But I want to be clear. You moved your goods to market on the roads that the rest of us paid for. You hired workers that the rest of us paid to educate. You were safe in your factory because of police forces and fire forces that the rest of us paid for.”
“Now look, you built a factory and it turned into something terrific or a great idea, God Bless, keep a big hunk of it. But part of the underlying social contract is you take a hunk of that and paid forward for the next kid who comes along.”
“I hear all this, you know, ‘Well, this is class warfare, this is whatever.’ No. There is nobody in this country who got rich on his own – nobody.”
“Other countries around the world make employees and retirees first in the priority. For example, in Mexico, the bankruptcy laws say if a company wants to go bankrupt… obligations to employees and retirees will have a first priority. That has an effect on every negotiation that takes place with every company in Mexico.”
“Every time the U.S. government makes a low-cost loan to someone, it’s investing in them.”
“To fix this problem [of stagnant wages] we need to end the harmful corporate obsession with maximizing shareholder returns at all costs, which has sucked trillions of dollars away from workers and necessary long-term investments.”
[reposted in the correct location]
Not a normative claim at all.
Is a normative claim! But the normative claim is “taxes should exist”… not equivalent to “seize the means of production”…
Also not a normative claim.
Being pedantic, not a normative claim. If the implied claim “we should change our bankruptcy laws to be more like Mexico’s” counts as “collectivising the means of production” by your definition then you really should’ve said so before, since that’s certainly not the usual one. Not doing so makes it seem like you’re trying to smuggle in the various connotations of the kolkhozes etc. to an obviously incomparable situation.
“Every time the U.S. government makes a low-cost loan to someone, it’s investing in them.”
N o t a n o r m a t i v e c l a i m
“To fix this problem [of stagnant wages] we need to end the harmful corporate obsession with maximizing shareholder returns at all costs, which has sucked trillions of dollars away from workers and necessary long-term investments.”
Kind of a normative claim I guess, but it hardly seems specific enough to be an example. Unless you’re saying that anything other than “obsession with maximising shareholder returns at all costs” is collectivising the means of production.
Every claim you assert as not normative either implies one, or is most easily explained as motivated by a normative belief Warren holds, that government ought to play the parent to wayward private interests. The only way she seems to see to do that is to collectivize various parts of the economy, whether she’ll admit it or not. You can argue that she doesn’t want absolute collectivization, but that’s very faint praise to an audience that thinks we’re already collectivized to the point that any additional amount is harmful to people, and doubly so if it’s coupled with shaming.
For example, Warren says taxes should exist. She also implies they’re a moral good, and conspicuously fails to favor any limit on them, or even to consider the possibility that those limits might already be surpassed – her quote is a moral sneer at anyone who thinks they are.
“Maximize shareholder returns at all costs” is a common bogeyman I see from the left. Sure, capitalists believe this is the role of a CEO. But that’s only in the context of a larger system in which corporations are necessarily limited by the ability of consumers to go elsewhere with their business. If the world’s most ruthless CEO is compelled to maximize shareholder returns by offering a product so efficient that it provides every consumer a better return than if they do without, then suddenly “maximizing shareholder returns” sounds like a really good thing.
If a CEO were allowed to force consumers to buy the corporation’s product, then we’d have a problem. But that’s a problem only if no one else is allowed to offer that product, and that’s only the case if some external body made a rule forbidding it, and forced everyone in the market to follow that rule. But no matter how you slice it, that wouldn’t be capitalism. If Warren is truly capitalist to her bones, she’d call to abolish such rules rather than try to set up even more, but I’ve heard her make zero noises about doing so, which forces me to believe she’s fibbing about being capitalist to her bones.
Right, so as I suspected you are equivocating between collectivisation and any kind of taxes and regulation. You must surely be aware that “collectivisation” and doing things to “the means of production” have certain connotations. Therefore I must conclude that either:
1. You are badly mistaken about which of those connotations are widely considered salient. If so, you should know that most people consider an association with Marxism, dictatorship and mass deaths etc. to be the salient features, and you will therefore be communicating ineffectively if you use “collectivising the means of production” to refer to completely different things.
2. You have some argument about how Warren’s policies would lead to millions of deaths if implemented.
or
3. You are being deliberately dishonest.
Which is it?
4. You’ve declared your own definition of what’s salient, that fails to address the genuine annoyance of people with sanctimonious attempts to tell them what they can do with their wealth;
ignored the examples from history of the wealth-destroying and wealth-creation-avoiding economic incentives that such policies put into play, which can result in mass starvation if a society doggedly continues such policies, though likely will just result in their eventual repeal and a lot of wasted time;
and made a rude accusation of dishonesty with a bad faith argument that makes me very uninterested in indulging your approach to discussion.
No I haven’t. I’m saying that everyone except for (or, I strongly suspect, including) you associates “collectivising the means of production” with kulaks not taxes. This is an empirical claim, not like my opinion man. Do you seriously disagree with it? Or was 3. on the mark?
I think taxes are so far away from the central example of “collectivizing the means of production” that we aren’t even speaking the same language. To be fair, I don’t think that mass starvation is required either. The central example would be a call to nationalize some company or industry, something that is conspicuously absent.
It’s perfectly fine to be some kind of ultra-libertarian, but it isn’t so great to argue semantics using private definitions flowing from the ultra-libertarianism. Self awareness of idiosyncrasy seems like a reasonable ask in a conversation.
@brad:
Does the health insurance industry not count?
Some notes on Kamala Harris’ record on criminal justice issues as a prosecutor.
Her persecution of Backpage also bothers me, though admittedly most of the candidates look bad on similar issues. But her campaign has had plenty of attention and plenty of time to catch on, and it really hasn’t; I think if somebody’s going to surge late and be a surprise, it’s going to be somebody else, not Harris. And I don’t really expect that from anyone. I think it will be Warren, though obviously the long-time and still-current frontrunner Biden can’t be counted out.
Trump’s pullback from protecting Kurds in Syria doesn’t seem to involve actually pulling out of the Middle East. This strikes me as the kind of story neither side’s partisans have any incentive to write…
?
Trump is being feckless and his policies aren’t cohesive. That seems like a pretty easy partisan story to write.
I’ve seen a couple of articles about it on left-ish sites, but the sad fact is that at this point “American troops are somewhere in the Middle East” is no longer a particularly significant story. Impeachment is more relevant and more attention-grabbing right now.
Not only are we not pulling out of the middle east, we’re not pulling out of Syria.
Has anyone checked out the recently-released 3rd episode of Canadian-Wilderness survival game The Long Dark?
I’ve been busy this week and haven’t gotten to spend more than a couple of hours with it yet. I whfg tbg gb gur cynarpenfu naq sbhaq gur fheivibe.
I’m not really sure what I think so far. I recently replayed the previous two episodes and mostly enjoyed them, although not as much as I did on my first play through. I know I’m pretty near the start of the episode, but there are some things I’m not liking so far:
1. There are a crazy number of wolves. Within the first 20 minutes of the game I was being stalked by 4 at a time. With how dense these packs are around places the game forces you to go, I constantly feel like I have to play with a gun in my hand all the time, which I feel weakens the experience.
2. I haven’t liked the timber wolves so far – it seems like they demand a level of aiming that the game isn’t optimized for.
3. I don’t like the change in tone. Episodes one and two were all about mostly being alone in a quasi-mystical apocalypse. The characters that you did meet were all archetypes and seemed more like characters out of a myth than real people. They’ve changed that here to being more grounded with other characters, and I don’t know if that fits the overall aesthetic
Why is it that every time I read about Brexit these days, I am reminded of this?
Now I can’t unsee it.
A teeth-grinding “Thanks!” to you.
Any thoughts on the final Indonesian report on the Lion Air crash last year? I caught the tail end (sorry, couldn’t help myself) of a radio piece; it sounds like the ground crew broke a sensor, a previous flight crew handled the problem just fine but didn’t report it, and the final crew was undertrained and so weren’t able to remain in control? My sense is that this casts the whole situation as more of a series of preventable errors than the all-Boeing’s-fault narrative I was getting.
Sensors break all the time, if an aircraft critically depends on a single sensor, then it’s badly designed.
For fans of stoicism and heavy metal, Aephanemer’s “Prokopton” is out today in the US – a melodeath album whose lyrics expound upon stoic philosophy.
The combination of vigorous, wild, dramatic melodeath with the level-headed aequanimitas of stoicism is a combo on the level of pepperoni-and-calculus or “a skit in a progressive metal album that sounds like a Third-Reich rally for Satanists but is actually a recipe for pot brownies.”
(Thank goodness Prokopton is so good, it utterly redeems the melodeath genre.)
—
What’s the most surprising confluence of unrelated disciplines you’ve encountered?
Nice! I fell in love with this record when it first came out in May (today’s release is a rerelease on Napalm Records), but I never caught the stoic edge of the lyrics, just that sweet sweet melodeath. I’ll have to pay more attention to the lyrics next time I listen to it.
If you like melodeath with excellent lyrics, Aether Realm’s Tarot is essential.
Zipf’s Law
What struck me was that “the” is 6% of the words used in English. A lot of languages don’t even have “the”! It seems weird that we make so much use of a word which may be unnecessary. Do the languages without “the” make the “the/a” distinction some other way? Or do without it?
Are there other extremely common words which aren’t universal?
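As a toy illustration of the rank-frequency pattern behind this (the sample text below is my own stand-in; only a real corpus reproduces the ~6% figure for “the”):

```python
from collections import Counter

# Tiny stand-in corpus (assumption: illustrative only; a real corpus,
# e.g. a book from Project Gutenberg, gives much better estimates).
text = (
    "the quick brown fox jumps over the lazy dog the fox ran and the dog slept "
    "while the cat watched the birds in the tree near the old barn by the river"
)

words = text.lower().split()
counts = Counter(words)
total = len(words)

# Print rank, word, and share of all tokens for the most common words.
# Under Zipf's law, frequency falls off roughly as 1/rank.
for rank, (word, n) in enumerate(counts.most_common(5), start=1):
    print(rank, word, round(n / total, 3))

# In this toy sample "the" is heavily over-represented (~28%);
# in large English corpora its share is closer to the 6% cited above.
```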
We actually had a lot of discussion on a/the over the past few open threads.
My contention is that articles are virtually entirely useless in the vast majority of cases in which they’re used (fun game: try and think of sentences in a realistic setting where the meaning of a/the isn’t apparent from the context).
Others pushed back on this, but there seemed to be pretty much agreement that they were not necessary in many cases.
Languages which don’t use articles or an equivalent grammatical construction (many East Asian languages, for example) will have to resort to using some kind of phrase in their place, in the rare cases where it actually makes any difference.
Doesn’t the indefinite article imply plurality?
Moby Dick : Ahab chased {a, the} whale
Lord of { ,the} Rings : {a, the} ring was destroyed
Germany 1932 : Nazis won {an, the} important election
If you’re reading Moby Dick, you know what whale he’s talking about. When there’s the potential for ambiguity, “the” is usually referring to items that have already been discussed in the very recent past, so you know which of the possible referents it refers to.
Consider that in my previous sentence, “the” appears 3 times outside of quotation marks: now try removing those occurrences from the sentence and explain to me what meaning has been lost? It sounds weird because we speak a language where we have to use it, but that’s just a feature of the language.
I’m not claiming articles don’t specify meaning, just (a) usually it’s unimportant to the speaker’s intentions (again, remove “the” before “speaker’s” just there – what have we lost?), (b) when this meaning is important, it’s usually available from context, and (c) when the meaning isn’t available from context, you can always use a phrase or sentence to specify what you mean. As proof, I submit to you the fact that probably the majority of people in the world use languages without articles, and they seem to be able to communicate pretty well.
Did you mean “the whale”? Arguably, “what whale” could just as easily refer to a species of whale, rather than to a specific whale.
Your points are well-taken (and this is all for fun, anyway). But how about another example : “Did you cut down the tree today?” This is basically to say, “There is a particular tree you were going to cut down, and I would like to know if you cut that particular tree down today.”
“Did you cut down tree today?” feels awkward, but preserves essential meaning.
“You cut down tree today?” removes the arguably-unnecessary did, if the speaker is addressing the treecutter after the anticipated time of treecutting.
“Cut down tree today?” removes you in favor of context, as hopefully the speaker is addressing the person who was to do the tree-cutting.
“Cut tree today?” because arguably to cut a tree “up,” that is, into discrete pieces, is just as correct as to cut it “down,” and therefore each is made irrelevant by the other?
“Cut tree?” because hopefully the speaker is addressing the treecutter in a manner that is relatively time-local but also antecedent to the anticipated treecutting.
“Cut?” because it is likely that the magnitude of cutting down a tree would preclude (or simply dwarf in importance) the acceptance of other cutting-type tasks, so context tells us that treecutting is likely the only cutting that the speaker would be asking about.
“?” because for God’s sake how many times do I have to ask for you to cut down the stupid tree?
Sidetrack: I like the way the meanings of “the city” or “the party” are highly context specific.
Well, to be honest, communication in this old marriage situation you’re alluding to doesn’t need any words. A mere glance, or ffs, a lack of glance or a lack of noise conveys the “Did you cut down the tree today” well enough :p 😀
I’m a God, not the God.
To pedantically reply to a Bill Murray line… imagine a language without articles where he says something like “I’m one god, not only god”.
It’s not quite as pithy, but the meaning is there. (Well, it occurs to me that the meaning of “only” is ambiguous because it can mean both “the one example of X” and “entirely composed of X” – let’s pretend it only means the former.)
This is actually hard to do with many/most words, especially grammatical words, because normal language is so redundant. If you mishear a word that someone spoke to you in conversation, you can usually reconstruct it by the end of the sentence.
Yup.
Are there languages where articles disappeared? If not, that would seem to be strong evidence that they are indeed useful.
Asking about this brought up a somewhat similar example in the Uralic languages, which all exhibit a feature where the third-person possessive suffix denotes definiteness as well, except the Baltic Finnic languages (Finnish, Estonian) and Hungarian, which seem to have lost that feature (the fact that it’s present in all the other branches implies that it was a feature of the proto-language).
That said it’s the kind of thing which is difficult to prove anyway: most languages don’t have an extensive written history (and for most of those who do, it’s rarely much more than a handful of centuries), and it’s really hard to reconstruct a lost feature from contemporary forms alone, so this is a particular case where absence of evidence is only weak evidence of absence.
I’m sure Scott’s doing his job to improve the the fraction.
I see what you did there
Attack of the the Eye Creatures
God, as soon as I parsed the the sentence I knew that if I went back and read it more slowly there would be an additional article.
I seem to recall a general principle in languages that the frequency of usage of words is inversely proportional to their information content. So the most common words tend to be words like “the”, “a”, “and”, “is” which convey very little information and indeed can often be left out entirely in many languages (a lot of languages don’t bother with “the dog is black and white” and just say “dog black white”).
That doesn’t mean that “a/the” conveys no information of course. Information conveyed by the article can include notably:
—New vs old information; “I saw a man [new]; the man [old] told me that…”
—Hypothetical or generic vs real referent; “A man could do this [any man, the idea of a man], but the man who did that… [the actual, real person]”
—Specific vs universal; “A boar can gore a tiger [claim about the potential of individual boars]; the Siberian tiger is a fierce predator [claim about all Siberian tigers]”
Languages that lack such a distinction may have other strategies, like:
—Word order; Russian tends to put new information near the end of the sentence, and old information near the beginning.
—Use of demonstrative adjectives, which can weaken over time and become definite articles — this is actually what happened in Germanic languages (including English), Romance languages, Greek and South-Eastern Slavic languages like Bulgarian — the, der, le, el and o all go back etymologically to demonstratives.
—Use of a “topic comment” structure which marks a particular word in the sentence as salient, eg Japanese normal sentence “gohan o tabemashita” rice [object] eat[past] “I ate some rice” vs “gohan wa tabemashita” rice [topic] eat[past] “I ate the rice; speaking of the rice, I ate it”.
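The earlier point about frequency varying inversely with information content can be made concrete with Shannon’s self-information, I(w) = -log2 p(w): the more probable a word, the fewer bits each occurrence carries. A rough sketch (the probabilities are illustrative, not measured from a corpus):

```python
import math

def self_information(p):
    """Bits conveyed by observing an event of probability p."""
    return -math.log2(p)

# A very common word like "the" (~6% of English running text)
# carries only about 4 bits per occurrence...
print(round(self_information(0.06), 2))   # ≈ 4.06
# ...while a one-in-a-million word carries about 20 bits.
print(round(self_information(1e-6), 2))   # ≈ 19.93
```

Which is one way of restating the claim: a word you can often omit without losing meaning is, almost by definition, a word carrying few bits.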
That’s a better explanation of the difference between wa and o than I got when I was learning Japanese.
You can think of “the” as a grammatical element (which just happens to be a word rather than an affix), like the past tense suffix “-ed” or the plural suffix “-s”. A characteristic of grammatical elements is that they have to be used where they are applicable, rather than only being used when the speaker explicitly wants to convey that information, and therefore are a lot more common than they may need to be. Languages without a given grammatical element will generally have some non-grammatical way of indicating the same distinction, the difference being that the speaker has the choice as to whether to use it.
“Please” doesn’t have an equivalent in some languages.
Articles in English are mostly semantically weak: in most cases they don’t convey any meaning that can’t be inferred from the context (which is probably why the the trick works). In the rare cases where they do convey meaning, you can replace “a” with “some” and “the” with “this/that”, preserving the meaning.
Most languages, even those without articles, have determiners (either function words or affixes) that represent these meaning distinctions. E.g. in Latin: “Persuāsīt populō ut eā pecūniā classis aedificārētur” – “He persuaded the people that a fleet should be built with the money (with that money)”
Yes.
Until I clicked your link, I was going to say the answer is “no”.
I love this video of an Irish woman who refuses to say yes or no.
@Wrong Species,
I prefer to believe that the Irish woman in the video is a close cousin of @Deiseach
I think my favorite part about that wikipedia article is that I don’t even agree with some of the English examples of usage of such a basic word as “Yes”.
I thought that answering yes in English to that question meant “Yes, I am not going”. Although I would usually view “yes” as somewhat ambiguous and would say “yes, I am not going”.
Although I think “no, I am not going” means the same thing as “yes, I am not going”.
Hrrrrmm… that’s rather confusing.
Selection error. The Germans have 16 words for “the”. Put ’em all together and it’s probably comparable in frequency.
(Well, six, but you get the idea.)
It’s not 16 different words, it’s different forms of the same word. Word frequency analysis in English would not, say, count “be”, “am”, “are”, “is”, “was”, “were”, “been” as different words either.
Tell that to this NYT crossword.
What do you guys think of plans to increase the size of the supreme court?
I hope every President does it to pack the court and it becomes an out-of-control arms race with a SCOTUS bigger than the Congress in a few decades. By 2100 every citizen is part of the SCOTUS. Of course it’s no longer practical for it to gather in one place so they elect a guy who names a nine-person subcommittee to do the actual legal work, and the rest of the SCOTUS just goes along with it.
Meh. “Every man a Justice” don’t have the same ring to it.
It’s a terrible idea, but it’s basically inevitable in a post Garland world.
s/Garland/Bork
Robert Bork received confirmation hearings and a floor vote, which he straight up lost 42-58, including no votes from Republican senators Lincoln Chafee, Bob Packwood, Arlen Specter, Robert Stafford, John Warner, and Lowell Weicker, as well as all Democratic Senators.
But Ted Kennedy was mean in his speech that accurately cast Bork as an extremist even compared to other conservative judges, which is clearly just as bad as Mitch McConnell refusing to hold either hearings or a vote.
As I read him @Jaskologist isn’t suggesting the two events are equatable, he’s suggesting the inevitability Brad is referring to can be traced back to Bork.
Why would it be inevitable? Jaskologist may or not be in agreement with Republicans who think there was anything untoward about a judge losing a confirmation vote, but it’s an indictment of them, if not of him, if that was the start of a downward spiral.
It’s times like this I wish nominative determinism was a stronger force.
Senator Lincoln Chafee, who looks like Abraham Lincoln constantly scratching himself.
Bob Packwood, who packs wood into ~~trucks~~ women. That one works!
Arlen Specter, who serves his terms after death.
I mean we could say “post eden” world too. But I don’t think the Bork escalation was a) a first cause b) the proximate cause, or c) the most important cause.
Garland is at least b, if only debatably c.
@Matthew S.
The Republicans’ retaliation to Bork was Thomas. They nominated a very conservative and controversial jurist and crammed him through.
The Democrats’ retaliation to Thomas was the filibuster under Bush II. They blocked many qualified jurists in the lower courts.
The Republicans’ response to the filibuster under Bush was Garland.
Those are all relatively proportional retaliations.
Moving to “packing the Court” would be a dramatic escalation. Not that I don’t think the Democrats would, but there is a big difference between “Tit for tat” and “nuke for tat”.
A proportional retaliation would be for a Democratic Senate in 2020 or 2022 say that they wouldn’t advance any of Trump’s nominees.
I was of the opinion that this was a retaliation against SCOTUS for the Florida decision that saw Bush II as President.
And no, altering SCOTUS is in no way proportional to any alteration at all of lower courts.
Garland was pretty precedent-following.
Very few opposition parties have confirmed SCOTUS members for lame duck presidents.
Lame duck has generally referred to the period between a presidential election where the incumbent isn’t running / didn’t win and the inauguration of the next president. Not the entire second term of a presidency.
FWIW I also expect the democrats to create new states the next time the House/Senate/White House align.
And no, altering SCOTUS is in no way proportional to any alteration at all of lower courts.
I’d be inclined to disagree in cases like Estrada, where the lower court filibuster was primarily driven by not wanting the nominee to be a possible future Supreme Court nominee.
I’d put the inevitability back further than Bork, though–I think the giant Cathedral power-grab of the 1950’s and 1960’s managed through the Supreme Court made the court a political body, and the escalation since was at that point inevitable.
@brad
I agree that Garland will be the most recent reason that Democrats will point to, assuming there isn’t another offense between now and then.
But blocking a Supreme Court nominee, with or without a vote, is not exactly precedent setting. It’s happened many times before, it will happen again.
@EchoChaos
Usually I have high conviction about ought and low conviction about is. Ought, after all, lives mostly in my head. In this case that’s flipped. I don’t have especially strong feelings about whose fault it all is or whether this or that escalation is a reasonable one; but I’m pretty sure 1) Court packing is coming and 2) would be further off if Garland had been confirmed, and probably even if he had gotten hearings.
@brad
Both of those seem true statements to me, although part of the reason they are true is that turning the Court from leaning left to leaning right on social issues (it has leaned mildly right on fiscal since about Reagan) is a BIG DEAL to the left.
Garland would’ve put that day further out, perhaps indefinitely. Gorsuch and especially Kavanaugh bring it further in, if it isn’t here now.
Lincoln Chafee was in the Senate 1999-2007. He inherited the seat from his father, John Chafee. Then he was reelected and served a complete term. John Chafee was in the Senate from 1976 to his death and voted against Bork. Your other names are correct. While we’re at it, David Boren (D-OK) and Ernest Hollings (D-SC) voted for Bork.
It depends on whether you want to make it a farce or not.
It seems like a dumb idea, but right now I’m optimistic that it’s dumb enough that nobody will actually try to do it, rather than so dumb that it will inevitably be done.
Like the idea of President Obama paying for his various plans by ordering the mint to make a trillion-dollar coin.
I would like to see it increase in size for the reasons mentioned below, and by means mentioned below:
Means:
Optional 1) The office of the President periodically polls members of federal and state courts and bars (i.e. justices and lawyers) on who they recommend for a SCOTUS appointment, and why.
Semi-optional 2) From those people who have multiple recommendations (preferably from multiple court districts) the office of the President selects a half dozen or so candidates that they find preferable.
3) These half dozen or so candidates are submitted to the current Senate, who investigate, advise, and vote on them. The candidates who receive greater than 50% of the vote are next in line for an open SCOTUS seat, in the order in which they received the most votes (or as determined by an additional vote of the Senate if there is a tie).
With the exceptions of openings that occur immediately after an election, this will help prevent lengthy periods of absent Justices, and will likely prefer more moderate candidates (whether that’s a pro or a con I don’t know).
Note: For the initial increase in membership I’d like either a Senate supermajority requirement for each new member, some sort of judicial democracy that accounts for the wishes of the political minority, or increase it by an even number and allow each “party” in the Senate to select a candidate on a 1-to-1 basis, with the President agreeing to “nominate” said candidates, and the entire Senate agreeing to confirm said candidates as a group.
Reasons:
Originally it was intended that there would be a SCOTUS judge per circuit court, so that the SCOTUS judge could ride their circuit when SCOTUS was not in session. Well, we’ve got more than 9 circuits now.
We’ve got a lot more people, and I would like a bit more diversity on the court (all meanings of the word). Mandatory term limits could help this (though placing the former Justice into one of the Federal District courts instead of actual forced retirement would likely serve the purpose of the founders in having no term limits), but so would increasing the number of Justices and letting demographics take their course.
We’ve got a lot more laws and cases, and I would like more of these cases taken up by SCOTUS than currently occurs. Settled law, that is universally applicable across the nation, is preferable to patchwork laws. To these ends I’d like SCOTUS to have the ability to subdivide its members into groups of 5 or 7 or some odd number to hear and decide cases. Following such a decision, it would immediately be reviewed by the full court, requiring a majority of the SCOTUS to vote to re-decide (or even re-hear) it; otherwise the decision stands in as binding a manner as if the full SCOTUS had decided (this would decrease those instances where various circuits have contradictory rulings). Increasing the number of Justices to 15 or so would allow this sort of framework.
I don’t think it’ll happen. Federal law limits the size of the supreme court, so you’d need the president, supermajority of the senate, and majority of the house to all agree it’s a good idea. That means all those people being the same party (itself unlikely) and also all those people not worrying about what happens when the other party takes power.
(edited for typo)
The supermajority of the senate requirement exists at the sufferance of a majority of the senate, and is not long for this world IMO.
It is true that in order for this, or much of any new legislation going forward, both Houses of Congress and the White House need to be in the hands of one party. This tends to happen the first Congress when a new President takes over. The last few Congresses where it was true were: 115th (Trump), 111th (Obama), 109th (GWB), 108th (GWB), 103rd (Clinton). The 119th Congress (Jan 2025) would be the one I’d put my money on for packing the Court.
One of the differences between French and American culinary practice is that the French tend to be conscious of the seasonality of produce. Local fruits and vegetables are usually better than those that have to be imported from afar, but they are only available during some parts of the year, toward the end of the growing season. As I understand it, the French tend to be aware of this, and gear their cooking to what is available, whereas Americans (and Canadians, for that matter) just get produce from wherever (local or imported) and cook the same stuff year round.
That raises the question of what the French do when local produce isn’t available, such as during the northern-hemisphere winter. Do they eat old-style preserves? Or do they put on T-shirts and baseball caps, LARP as American idiots, and buy imported veggies?
Probably a bit of both.
Hold one’s nose and buy moroccan strawberries, but don’t do it as much, knowing that you can go to a farmers’ market in a few months.
English Christmas baking tradition is, of course, very heavy on dried fruit in mincemeat, Christmas pudding, Christmas cake, etc.
Some produce keeps a long time. “Légumes d’hiver” is an actual thing, it includes (some) squashes, leeks, lentils, etc. Lots of potatoes.
Oh and roasted/steamed chestnuts. That’s a major flavour of late autumn/winter in traditional cooking.
Of course now everything can be flash-frozen so it’s not as much a consideration.
Even apples keep through most of the winter (if stored correctly).
I think that eating the same stuff all year is a fairly recent development. I remember being a kid in the ’80s and the food we ate was heavily influenced by the season. Also, most Americans live within a day or two of places that can grow produce year round, so it makes it a lot easier to eat the things you like all the time. I don’t know if France has that same access to high quality produce.
It also seems that France’s food culture is stronger than America’s, so they are going to be slower to change the way they eat in a given season, even if summer vegetables are available.
You seem to have a very negative opinion of American food culture. May I ask which parts of the US you’re using in this comparison?
As a resident of the northern midwest, seasonality of produce has exerted a strong influence on my menu for my entire life. Sausage stuffed zucchini is delicious, but is only generally made in mid-to-late summer (when the zucchini have grown large enough for proper stuffing). This is despite the fact that meat-and-starch-stuffed-squash, as a heavier and richer dish, fits more easily into a cold-weather menu. As we get towards fall, the stuffing recipe changes and we start stuffing pumpkins instead (the pumpkin stuffing involves cranberries, which balance nicely with pumpkin but would tend to overpower the milder-flavored zucchini). Once pumpkin season has passed, the squash in our diet changes to mostly spaghetti and other winter squashes, but those don’t tend to stuff as well, so those recipes don’t usually come out again until next year’s zucchini is ready.
These trends are even more pronounced with fruit. I don’t think many Michigan residents make it past five years old without knowing when apple season is and what recipes only get made then. Traverse City’s biggest tourism event is centered around the cherry harvest.
This isn’t to say that we never use imported or preserved produce, but casting this as “Americans don’t know the difference between seasonal and non-seasonal produce” seems like a huge jump that I have a hard time imagining justification for.
I felt this was a poignant description of the appeal that fiction has:
https://www.newyorker.com/magazine/2011/09/05/town-of-cats
Partial Vegetarianism
If the inconvenience of choosing the vegetarian option is less than X, you must. Otherwise you may eat meat. The factors of inconvenience can include the price, taste, effort of asking etc.
How about nutrition?
I’ve heard of that concept under “reducetarianism”.
Yep, I basically do that, mostly to work around the problem of the “token vegetarian option”. If I have to choose from a menu of 10 things, but the one or two vegetarian things don’t appeal to me at all, then the other 8-9 choices with superior taste are just there, taunting me. It requires a lot of willpower to resist that, and I don’t consider it 100% my fault when I give in occasionally. Society takes at least part of the blame by consistently presenting me with temptations, exposing me to peer pressure, and making everyday tasks like shopping considerably more burdensome [1].
The flexibility I give myself makes it possible to maintain a long-term commitment even in the face of ego depletion. If it was “all or nothing”, I’d choose nothing. Being 95% of the way there is a huge improvement over that baseline.
I still call myself a vegetarian. It leads to less confusion, and I know with a high degree of certainty that if you just dropped me into a 100% vegan society, I’d very easily adapt and wouldn’t miss a thing. I still have some uneasiness about being called out for “pretending” to be a vegetarian etc, even though this has never actually happened.
[1] Prepared food in particular can be a highly inefficient market. There are products with a high number of properties, you are offered a tiny subset of all possible combinations of those properties, and on top of that it’s often hard to know in advance what you’re getting. This is fine in situations where most properties are either “yeah!” or “meh”, but gets very troublesome as soon as one or more properties become a hard “no”, which happens with meat, but also stuff like food allergies, strong aversions to certain tastes/textures, etc.
A person can always ask for a substitution. This doesn’t work at places like Cracker Barrel, but does at Taco Bell, and presumably many more upscale restaurants.
I’m reading Vaclav Smil’s “Creating the Twentieth Century” and struck by the extraordinary profusion of innovations that came out of the last two decades of the 19th Century (the middle of the period he describes). It struck me too that you could make a good argument that the art, music, literature, architecture etc of that time were similarly extraordinary, with an unusually high number of timeless classics per year– admittedly my personal affection for Dvorak, Sibelius, Art Nouveau/Secession, ragtime etc biases me here.
And yet the political and economic landscapes of that era were horrible. The Long Depression and the labor wars; the rise of authoritarian nationalism, anarchist terrorism, and brutal imperial reaction to the prior two; the spread of socialism among the intellectual classes; the institutional entrenchment of pseudoscientific racism. Must have been a frightening time to live through.
I think you could say the same about the interwar period on all counts: extraordinary technological and scientific progress, well above average artistic creation (with the same caveat about personal taste biases), and of course the political and economic horrors go without saying.
My questions for the room are:
1. What’s the probability that we are living in a similar time today, i.e. a time that 100 years from now historians (conditioned on historians still existing) will view similarly along these axes to the Long Depression and Great Depression eras?
2. If we are living in a similar time, what should one do about it? More specifically:
(a) selfishly, what lessons should we learn from those times about how to insulate oneself from political and economic problems?
(b) less selfishly, what lessons should we learn about how to capitalize on the unusually great opportunity to be part of technological and artistic flowerings that will greatly benefit future generations?
I think your period is too short. The late 1800s are, to my mind, a solid improvement over the mid-1800s, which had civil war, regular war, and massive social and economic unrest across all of Western society – if the worst we have to deal with in 1885 is a few anarchist bomb-throwers and some intellectuals talking with each other about this new guy Marx, I don’t know that 1880s Me would agree it was all that scary a time to be alive.
Fair point!
Agreed. 1840s – for example – seem far scarier.
According to nearly 70% of Americans, the United States is on the verge of civil war. This is a shockingly high number to me.
https://www.washingtonexaminer.com/washington-secrets/battleground-7-in-10-say-us-on-the-edge-of-civil-war
Is the feeling on this board that high? Either a majority of American posters here agree with that statement or we are wildly unusual. Both are plausible, so I’m curious which it is.
From my perspective, I view a second American Civil War as between very and extremely unlikely. The current political situation doesn’t have a natural center for a non-Federal power to emerge, as Richmond was in the 1860s, and neither side is anywhere near actually taking up arms. Both still have faith that the political process will come to some sort of accord.
Edit:
@drunkfish has some concerns about the reporting on this polling. Please read his comment before responding. Thanks for digging in.
I say we are wildly unusual; I see it as having a ~0% chance of occurring in the next eight years. I think popular concern about it comes down to three things:
1. People watching the news and not understanding journalism’s incentive structure.
2. People who know it’s over-hyped but who, due to social desirability bias, want to appear “concerned” in order to distinguish themselves from the idiots who don’t pay attention to politics at all.
3. An abstract wish for big booms to occur due to anger at stuff like this (http://www.unz.com/isteve/father-cant-stop-his-ex-wife-from-giving-their-son-puberty-blockers/), though most wouldn’t welcome an actual war, which isn’t going to happen since no faction of the elite cares enough. (GOP elites care mostly about tax cuts and Gee-Oh-Politics.)
That article is clickbait. The son is not being given puberty blockers any time soon.
I don’t like childhood transition, but this case is overblown.
“The son is not being given puberty blockers any time soon.”
What do you base this on?
Apparently the judge overturned the jury’s decision. But the initial decision was actually horrifying and should shock and scare everybody. I don’t know at what age the kid would have started taking blockers, but this shouldn’t be a reason to accept this nonsense.
The Washington Examiner was the initial source of the rage. And sure enough, it is the one that Unz is using to further their rage.
But the Washington Examiner stealth edited their article.
If you care, you can follow the links here
and compare for yourself https://twitter.com/ClenchedFisk/status/1187406488123400196
Or was this your point? People get enraged at things at the very first news stories and resist listening to corrections that their enemies are not pure evil? CS Lewis had some words about that.
There is a correction of sorts in the middle of the post, “(Initially I thought the medical intervention would occur now, at age 7 — sorry about the error in my initial post, but is age 11 really any better?)” ideally it would be in red letters at the top.
“Or was this your point? People get enraged at things at the very first news stories and resist listening to corrections that their enemies are not pure evil? CS Lewis had some words about that.”
No, the “correction” doesn’t really make the story any better. This is a common tactic when you don’t want to directly defend something: find some detail about the original claim that is incorrect and then declare it “fake” or “clickbait.” The gist of the story was entirely accurate.
I don’t like childhood transition. (I suspect that in a decade we’ll have then-adults who desisted and who say they were reacting to their parents’ explicit or implicit choices being made now.) But this case is at least 3 years from anything happening.
We give parents wide latitude to raise their kids, including making bad choices.
I think that headline might be *wildly inaccurate* (to the point that if you see this in time I think you should probably edit your post).
The first two lines are:
Those two statements bear almost no resemblance. I went to the only survey they linked, and the most relevant-sounding result is slide 17 here http://politics.georgetown.edu/full-graphics-and-slides-october-2019-2/, which says
and gets the average result “67”.
That headline is so misleading I think it discredits the entire site that posts it. “Most people think we have a lot of division” and “most people think we’re ON THE VERGE OF A CIVIL WAR” are incredibly different statements, that don’t belong in the same breath.
Even by the very low standards of poll reporting, this is egregiously bad.
The polling firm itself has a reasonable reputation, so the data is presumably decent until proven guilty. But chalk this up as another indictment of uncritically repeating headlines.
I was able to edit, so I added that. I didn’t read the poll itself, so I really appreciate your deeper dive.
The number did seem shockingly high to me, so now I know why. Thanks!
Yeah honestly I only dug into it because the number just didn’t parse at all.
Thanks for the edit! I suggested that since I figured you weren’t intentionally sharing nonsense. Now I’m kinda curious though, if asked to speculate on why americans put that number so high, what people in this thread would have come up with to justify it…
I mean, the fact that Americans on average think we’re 67% of the way to a civil war is still bad, it’s just a different kind of bad.
And I suspect that average came out of people’s rear end, mostly. I don’t think it actually had much to do with numerical reasoning.
@EchoChaos
I have a really hard time deciding what my own answer to that question would be, to be honest. If I parse it as “the probability of a civil war in the near future”, it’s low single digits (for the next 10 years I’d put it below 1%). If I parse it as “The level of disagreement between you and [caricatured outgroup member] compared to the level of disagreement between two sides in a civil war” then… Maybe I do put it above 50? I just don’t understand a quantitative scale running from “agreement” to “killing”, as if those share an axis.
If you have two axes, “level of disagreement” and “willingness to use violence”, then I think there’s a compelling argument that on the disagreement axis we are pretty far along, just by virtue of people on both sides often being entirely unwilling to even engage with the other side.
I think the way the question is asked basically forces its result, because it’s just so incoherent.
I don’t read that as being 67% of the way towards a civil war. The way I interpret the question is that a 0 would mean everyone in the country has the exact same political beliefs, a 50 means there are multiple beliefs, but people mostly work together and are generally willing to compromise, and a 100 is all-out civil war.
In this reading of the question, a score of 67 puts you 34% of the way to a civil war, which I think is still too high, but it’s not as unreasonable.
BREAKING NEWS: The majority of Americans believe that our country is already a third of the way towards an Orwellian single-party state!
This is a better headline and got a good laugh. Thanks.
Ugh, the Washington Examiner is trash. I haven’t read the article yet, but my skepticism is high.
*reads article*
Yeah, what everyone else said. The poll doesn’t say what the headline says it does.
I have updated my prior on all stories by them with this new fact.
While we’re on the topic, just wanted to say I was starting to worry about future civil war a couple months ago (due to the podcast It Could Happen Here), but Scott’s book review of Secular Cycles made me think again. We may be at a divisive time, but in a broadly well-off economy I don’t see very many people wanting blood.
@EchoChaos
Sort of related: about a year ago Harper’s Magazine had a piece on a “progressive states’-rights strategy” for the Tenth Amendment titled “Rebirth of a Nation: Can states’ rights save us from a second civil war?” by Jonathan Taplin. Much of it is standard anti-Trump/anti-conservative rhetoric, but I was reminded of similar “progressive states’ rights” essays during the Bush administration. It seems that whenever the other party has the whip hand of the federal government, the opposition rediscovers the Tenth Amendment (much as the filibuster is a good thing when one’s party is in the minority of the Senate).
On its face the gist seems plausible: more self-governing states would seem to leave less need to battle over the national government. But there was far less of a federal government in 1860, just before an actual civil war.
As far as the chances of an actual shooting war go, I’m pretty doubtful. Who would fight it?
If it isn’t fought from keyboards, I just can’t imagine it happening; there just aren’t enough warriors for a war.
I think my favorite thing in that poll are these two back-to-back questions that got 80%+ “Strongly agree” and “somewhat agree” responses.
The other side needs to compromise with my principled leaders who should hold the line.
Makes sense to me! 🙂
That cracks me up!
Compromise usually means good guys (that is, you) lost, or at least failed to achieve the goal fully.
Speaking only for myself, I think a civil war in the US is vanishingly unlikely if we define a civil war as being between two well-supplied military forces. I could be missing something, but I don’t see such a neat division inside the country, and I suppose I’m too used to living in the last superpower to imagine other countries giving military aid to one or both sides.
I *can* imagine a long nasty insurgency.
In the modern era one can think of a Core, Fringe, and Middle in the following sense:
Core: Wall Street, D.C., Hollywood, politics in developed countries, upper-level military in developed countries, cutting edge companies, Harvard.
Middle: Suburbs, Middle America, the working and middle class in the developed countries, everyday occupations.
Fringe: Developing countries, military expeditions to developing countries, survivalism and rustic living, black and grey markets, impoverished areas of developed countries, political ideologies and cultural traditions not popular with developed country elites, autocratic developing country politics, Antarctica.
In science fiction visions of the future, you find a similar Core/Fringe distinction:
Core: Ecumenopolis, interactions with highly intelligent AIs, high-level government and politics on the most developed and powerful planets, galaxy-spanning corporations, Earth as the Galaxy’s capital.
Fringe: Newly terraformed planets, underdeveloped planets, aliens resistant to the technology and culture of more powerful species, autocratic alien species, interactions with robots as menial laborers, expeditions to poorly-known parts of the Galaxy, war on distant planets.
Science fiction seems to have a bias towards the Fringe in its storytelling. Often the work starts with the character in the Core being “bored” with the Core and seeking out adventure in the Fringe, where the remainder of the story takes place. Or it starts with a character in the Fringe and features a chapter or two in the Core, where the Fringe-originating character feels uncomfortable and is happy to escape.
Imagine that many facets of industrial society were correctly predicted by pre-industrial writers, but that they focused mostly on the Fringe. A book would start in suburban America with the main protagonist being bored and seeking adventure in the Fringe, it not occurring to him to seek adventure in Hollywood or on Wall Street. Corporations would often be seen, but usually as an external force oppressing the protagonist; rarely would the protagonist be inside the corporation itself. Migration from the Core to the Fringe would be more common than the other way around. Portrayal of the hellish (to a pre-modern observer) density of modern Manhattan would occur, but rarely as a permanent setting, with most of the action taking place relatively close to nature. It would make sense to portray the Fringe more often than the Core, because pre-modern readers would be able to better identify with the Fringe than the Core. But this very thing makes the Fringe less interesting than the Core.
What are some works of science fiction that portray a far future taking place mainly in the Core? The Age of Em (not a work of science fiction, obviously) does a good job of staying firmly in the Core, speaking mostly to the experience of ems (who will dominate the world) and not to the experience of humans (who will be on the Fringe of the world), yet this very fact was the subject of complaints that it should have focused more on humans and less on ems, whom readers were less able to identify with.
Asimov’s “The Caves of Steel” comes immediately to mind; its sequels concentrate on the fringe. I believe the first _Foundation_ novel was also pure Core.
Not sure whether it’s really most of the movie, but The Phantom Menace should be mentioned for the prominence of Coruscant. Like with The Nybbler’s example of Trantor, it’s an ecumenopolis.
ETA: the distinction is not at all geographical, but Ada Palmer’s Terra Ignota series is very much Core.
I was helping my cousin with his math homework the other night and quickly realized that he had no idea how to do any of it. This kid is in special education, but his teachers don’t bother having him demonstrate proficiency in basic problems before moving on to the next step. So I spent the next thirty minutes ignoring the homework, trying to reduce the problem down to its most basic form, hoping that if he understood that, he might be able to figure out the rest. But it didn’t work, and I ended up coming up with a rote technique that he could use instead. It worked for some of the problems, but as soon as he got to something even slightly different, he was back to square one. After spending about an hour solving four problems, we gave up. I told his parents about it and they emailed his teachers, saying the homework was too difficult.
I realized that this exemplifies math education as a whole. I was always annoyed that they would teach us some technique, use it for a few examples, and then move on to the next thing, even if we had no understanding of what we were doing. But now I understand. They’re teaching to the lowest common denominator and it’s the only way to move on. Of course, those on the lower end aren’t really going to understand it but it doesn’t really matter. They are going to be confused in school, but once they graduate, they’re done with advanced math. My cousin isn’t going to college. He knows how pointless it is for him to be taught about algebra and geometry. Those on the higher end probably aren’t going to be hurt that much by this either. They have a better mathematical intuition and can figure it out for themselves. It’s those in the middle that are probably the most hurt. If someone tried to get them to understand the principles, then the techniques would be easier to pick up and retain. How many people are bad at math because no one bothered to explain what they were doing?
Math has the property that if you don’t actually learn lesson N, you will often be able to struggle and get by via memorization and plug-and-crank for lessons N+1 through N+K, but then somewhere later you will be really screwed by your lack of understanding of lesson N.
There’s a stark conflict here between the platonic ideal of making sure the student fully understands each concept before moving on to the next, and “you have 180 days, 24 students, and need to get through these topics by the end of the year.”
Right, but surely we can strike a better balance than what we do now. What’s the point of going through those topics by the end of the year if the students don’t get it? It’s even worse if the student only needs a little bit more time to understand N, which would make it easier and faster for them to learn N+1 through N+K. If they’re constantly backtracking, then not only are we farther from the platonic ideal, it’s just vastly inefficient.
That’s the future teacher’s problem, though. And, because school dates are set well before the teacher gets the student, they don’t have the flexibility to take “a little bit more time” to teach a particular student. Summer vacation is going to start at the same time, and the teacher is going to be held to whether they taught all of the things they were supposed to teach.
A few iterations of this over a couple of years, and you start seeing 5th grade teachers who can no longer paper over the deficiencies baked in by prior teachers passing somebody along because they only just barely understood.
I’ve wondered if we couldn’t improve outcomes by having teachers follow a cohort of students through a school, rather than having each teacher teach a particular grade. That is, in elementary school, a teacher would start with students in 1st grade, then when they move to second grade, the teacher would remain with them, all the way to 6th grade when they move to Junior High. Then, the teacher would pick up a new cohort of students in 1st grade.
It would reduce the incentive to just pass somebody if they only barely understand something (or don’t understand it, but can just be made to pass the test), because that deficiency doesn’t become somebody else’s problem. It’s just pissing into the wind from the teacher’s perspective to handwave missing fundamentals early on, because that’s going to come back to bite them in a couple years.
That is probably a nice system, but also probably relies on more knowledgeable/skilled teachers than we can count on having (because they now have to be able to teach 6 years worth of curriculum rather than just the same year over and over), as well as less turnover. I’m not sure you can count on the same person still being a teacher for 6 years, much less still being a teacher in the same place.
@acymetric or having the temperament for dealing with 6 year olds and 16 year olds.
Only through Junior high, which would be somewhere in the 11-13 range rather than 16. Although that only helps a little.
The continuity of the relationship might also be helpful (although I could see where it would also cause problems…if a teacher decides they don’t like you and you’re stuck with that teacher for the next 5 years that’s a tough break).
> in elementary school, a teacher would start with students in 1st grade, then when they move to second grade, the teacher would remain with them, all the way to 6th grade when they move to Junior High. Then, the teacher would pick up a new cohort of students in 1st grade.
Is done in Waldorf schools.
The huge disadvantage is that the kid is stuck with a teacher in an extremely powerful position for (IIRC) 8 years.
No hope for a fresh start with a better teacher — who explains better, doesn’t hold subconscious prejudices, etc.
And that’s where the “24 students” becomes a problem. Because student A can learn topic X completely in three days and will be bored out of her skull if you spend longer than a week on it, student B needs a week to mostly get it and another week to really understand it, and student C will never do better than guessing the teacher’s password no matter how long you spend on the topic. Oh, and there’s also students D through W.
Even if you did only have to worry about one student, having a teacher follow them through the grades would only push the problem up a level. Now instead of having 180 days to teach the standard first grade curriculum, you have 900 days to teach the standard elementary school curriculum. (Or 12 years to teach the entire curriculum.)
More tracking/leveling earlier might be a partial solution to these issues, but it raises problems of its own. Even setting aside standardized testing, you wouldn’t want to doom a first grader who has trouble with subtraction to permanently be on the “slow track” for math, never able to catch up to his peers. And if you want switching between tracks to be possible, then each track has to cover the same general core curriculum anyways.
From a god’s-eye view, I think the obvious solution at the high school level is to encourage greater specialization: students who don’t want to pursue a STEM career don’t need calculus; students who don’t want to be English professors don’t need literature analysis. Elementary education is a harder problem.
@thevoiceofthevoid
Yes, but that is why the “slow” track is called slow. It’s more or less the same material, but taught at a slower pace, with more repetition, practice and less abstraction/more object level.
Then moving up a track merely means slower progress in total than if you had been on the fast track all along, assuming that the student is able to adapt to the faster pace and higher abstraction.
For example, top level is A1-A5. Lower level is B1-B5.
Then a student that follows the fast track entirely would spend 5 years at this stage of education: A1-A5. A switcher could spend 6 years for the same material: B1, B2, B3, A3, A4, A5.
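Under the toy assumption that each unit takes one year, this example generalizes: wherever the switch happens, re-entering the fast track at the unit just completed costs exactly one extra year. A minimal sketch (the `path_for_switch_after` helper and the A/B labels are just this example’s notation, nothing standard):

```python
def path_for_switch_after(k, n=5):
    """Units taken by a student who starts on the slow (B) track and
    switches to the fast (A) track after completing B-unit k.
    Per the example above, they re-enter at A-unit k, repeating that
    unit's material at the faster pace."""
    return [f"B{i}" for i in range(1, k + 1)] + [f"A{i}" for i in range(k, n + 1)]

print(path_for_switch_after(3))       # ['B1', 'B2', 'B3', 'A3', 'A4', 'A5']
print(len(path_for_switch_after(3)))  # 6 years, vs. 5 on the pure fast track
```

Note that k + (n − k + 1) = n + 1 regardless of k: under this toy model, switching costs one extra year no matter where it happens.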
@thevoiceofthevoid
The main problem with having too many students in a classroom is disruption. If, for example, 1 in 20 students on average is the type that will keep yelling stupid things at their classmates during the lessons (or start loudly playing random YouTube videos during the computer science lessons), then statistically classes with 30 students are more likely to be disrupted than classes with 25 students, 20 students, 15 students. The situation is more about the greater likelihood of getting an extreme disruptor, than about increasing complexity of teaching greater amounts of non-disruptors. (Like, the class with 30 students can actually become quite okay for a few days when that one kid suddenly gets sick and stays at home. The mere difference between 30 and 29 students in the classroom is not enough to explain the magnitude of the change.)
The second greatest problem is having kids with different abilities and interests. Now this could in theory also happen in a class of 5 students, if you’d get one Math Olympiad winner, one quite good learner but not a deep thinker, one bad learner who is mentally average but considers learning boring and had bad teachers in previous grades, one literally almost-retarded kid (but only almost; that’s why the kid is in your class), and then one weird kid with some combination of autism and schizophrenia who also happens not to speak English as his first language. Now go ahead and prepare a lesson all of them could enjoy together.
(Again, a greater size of classroom makes it statistically more likely that something like this will happen.)
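The statistical claim is easy to make concrete. A minimal sketch, assuming (as in the example) that each student independently has a 1-in-20 chance of being a serious disruptor; the independence assumption is a simplification, since disruption also depends on group dynamics:

```python
# Probability that a class of n students contains at least one disruptor,
# with p = 1/20 per student, assuming independence.
def p_at_least_one(n, p=1 / 20):
    return 1 - (1 - p) ** n

for n in (15, 20, 25, 30):
    print(n, round(p_at_least_one(n), 2))  # 15 0.54, 20 0.64, 25 0.72, 30 0.79
```

Even the smallest class here has better-than-even odds of containing a disruptor; the larger classes make it close to a sure thing.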
Yes, tracking/leveling raises all kinds of problems, but ignoring different abilities and attitudes solves nothing. No child is left behind, but also no child gets too far ahead of the last one, unless they also take private tutoring, in which case they mostly waste their time at your lessons.
So we should rather think in the direction of how to fix differentiated education, so that e.g. being slower temporarily does not create compounding permanent effects. Maybe make a distinction between “slower learning (of the same content)” and “dumbed-down learning”? Perhaps we should make everyone learning at their own pace the norm, not the exception, even if that creates some logistical problems for the school. Like, instead of “being in the 4th grade” you could simply attend “level 5 math” and “level 4 chemistry” and “level 3 history”; and your classmate would have it the other way round.
One thing that occurred to me is that this isn’t the way it works. Some classes are very, very quiet, others are very loud, and my hunch is that this isn’t consistent with a random distribution of troublesomeness among students. Rather, disruption is to a large extent a function of the dynamic between students.
Schools could combat disruption by rearranging the classes, ideally with some understanding of the psychology involved, but at worst, just randomly breaking up disruptive milieus would probably help.
(Epistemic status: guesswork)
@Viliam @Ketil
Throughout my K-12 experience, distracting students were never really a problem, though stories I’ve heard suggest that they most certainly are in some classrooms. I suspect the probability and level of distraction are a function of a number of things, but most importantly of whether the teacher can credibly threaten real punishments for any and every student who disrupts the class. If detentions aren’t enforceable, students with a tendency to misbehave will misbehave. If half the class doesn’t give a damn and misbehaves, the teacher is screwed if they can’t send half the class to the principal’s office. If getting sent to the principal has no real consequences, it ceases to be an effective threat.
The teacher’s job is also a lot easier if the vast majority of the students are predisposed not to be disruptive.
@Aapje
That kind of system sounds a lot like what I’d design as education czar. Abolish the “grade” system; have slow, standard, and fast tracks for each subject. Switching from a slow track to a fast track probably entails going back slightly in the sequence to repeat some material because of desynchronization, but going at the faster pace from there on. Then possibly branch out into electives at the former “high school” level with prerequisites of a certain progression through the track.
Additionally, abolish the idea that every high school graduate ought to know calculus (as I suggested above). If you decide to stay on the slow track for a subject, you’re not going to get as far by the time you graduate, but you probably won’t need an in-depth understanding of that subject anyways.
Needless to say, this would not be possible to implement in the current (US) education system without sweeping reforms.
Additionally, abolish the idea that every high school graduate ought to know calculus (as I suggested above).
In 2018, only 19.3% of students graduating from high school in the US had taken calculus, and in fact, only 48% of high schools even offer calculus. Source: National Science Foundation. The idea that every high school student should take calculus is already well-and-truly abolished.
@littskad
Huh, guess I should be more careful about generalizing from my personal high school experience (public school in a relatively rich town where pretty much everyone was expected to go to college).
When I was in (a Quaker) elementary school, all our math was taught through Individually Prescribed Instruction (IPI). Each student took a placement test at the beginning of the year to assess their knowledge. Based on the results, each individual would work through new concepts, and once we proved mastery, we’d move on to the next concept. This allowed us to all work at our own speed and level. Our work was all hand-graded. I imagine today with computers, this would work even better.
What was the teacher’s role in this system? Was there anything resembling traditional lessons/lectures or did you all teach yourself out of textbooks?
If I remember correctly, she would provide personal assistance, if a student had repeated difficulties in mastering the concepts. But I think it rarely happened. Mostly she was scoring how we did on the various skills and tests.
Hahahaha, man, I can’t even get college graduates to understand and approach things conceptually. This is not an easy mindset and most people cannot do it. Most people just need to get the rote mechanic and you’re lucky if they can perform simple problem-solving within their simple domains.
I actually just had a conversation about this over lunch with a coworker. Standard griping. Even at manager level, some managers struggle with this concept. I don’t think it becomes reliable until you start getting at senior manager level or above.
I think many schools, at all levels, are not teaching problem solving and critical thinking skills nearly as much as they used to, and may even be doing things to actively discourage these skills.
Among my friends/peers, the majority of college graduates don’t have the ability or presence of mind to think anything through critically. Those who have gone only as far as a high school education do much better in this regard. I’m not sure exactly why this is, but it may have something to do with being faced with and exposed to certain realities of life sooner than college-bound students, since schools are no longer emphasizing these skills.
Additionally, I have aunts and uncles who have been college professors for several decades, and they have also noticed and discussed with me their observation that many of today’s students lack many of the critical thinking skills that used to be much more common among college-level students.
I feel like you might be overstating this. These people literally can’t think critically about things, or they just don’t think critically about the things you think they should be thinking critically about?
I am capable of thinking critically, but there are tons of things I don’t think critically about. Some of it is laziness, some of it is because I don’t feel the given thing is worth the effort, some of it is because I don’t have time to critically evaluate everything everywhere all the time. Some of it is probably unintentional, or at least partially unintentional, avoidance of critical evaluation in areas where I have a particular bias that would be challenged. Probably plenty of other reasons.
I never said they “literally can’t think critically about things.” I said that they either don’t have the ability or the presence of mind to think things through critically.
I don’t expect anyone to think about everything critically. I’m mostly referring to things that are important to them or things that they are required or need to do, whether that be in their home, work, or school lives. I’m talking about situations where they have a problem and solving it is their goal, but if the first solution they try does not work they never even consider a different method, much less try one.
I can’t get people to think critically, understand high-level goals, or comprehend processes in their own job functions. This isn’t me asking “so what do you think about the Many Worlds Theorem,” this is me asking “What would you say you DO here?” and them not understanding how they fit into a team, or how the overall process works.
This is typical for any staff position, not unusual for middle management, and usually not at all present in senior management. This, IMO, is one of the big divides between the Big Wigs and the peons.
I know a guy who works in a consultancy, and he frequently encounters entire teams of people who are unable to do a simple task like: “Describe, in two sentences, your role in the company”. People are unable to distinguish between the tasks they do, and the role they play.
What you focus on as the question asker may very well diverge from what’s salient to the answerer.
An awareness/focus on “teams” in particular (and social context in general, which often would include a focus on the inter-relatedness of the components of a process) are a facet of personality, not intelligence or analyticalness. For a more in-depth take on this, I recommend reading about the “products” of Guilford’s “Structure of Intellect”.
Be glad that you actually have a diverse employee body, because I can guarantee there are things that you are missing that they are picking up on.
(Aside: Given the political nature of promotions, it’s not surprising to me that the vast majority of exec-level people would be socially-contextually aware. This may actually be a warning sign, though, that there’s too little intellectual diversity in the higher-ranks, not a signal of the appropriateness of their ranking.)
(Additionally, depending on how low-level a person is, they may be effectively socially excluded from knowing the larger context of their work.)
I really don’t think there’s a trade-off. This isn’t social knowledge. Staff-level people who don’t understand how they fit in on a team don’t have a problem with social skills, and most have no problems understanding social nuance or managing relationships (otherwise they would have lost their jobs). They just don’t have the ability to understand and improve complex systems.
Our middle managers all started out at staff level and succeeded there before moving into management roles. Part of the reason they were promoted was because they can fix systems and understand how they fit in on a team. However, working in a factory is pretty cross-functional, so this isn’t analogous to all organizations. I’ve been in other companies that were heavily siloed and nepotistic, and middle managers did not always have this skill set.
That’s not to say that these people are not useful. They undoubtedly have specialized knowledge that other people do not have. However, they do not really grok “why am I doing this?” which means they cannot answer “how can I improve this.”
I would need pertinent examples of what you mean by “systems” and “understand how they fit in on a team”. Both of these imply the need for contextual awareness. As an example of what I’m getting at: My personal contextual awareness is not that good, but my implicative, cause-and-effect awareness is pretty darn great (at least on the job).
Obviously, not everyone is capable of the same level of contribution (even at the lower levels, some people are far more capable at various sorts of work that would be considered impossibly tedious by most higher-ups). However, I can’t tell whether you’re getting at that kind of division, or whether you’re genuinely focused on a matter of personality salience.
It’s possible your workplace really doesn’t need the other kinds of salience, so they aren’t recognized. Part of living in a society of non-integrated businesses allows each business to totally outsource various important things to its suppliers or customers.
I can give you some pertinent examples in a similar context to ADBG’s experience, though I can’t speak to how this shows up outside of manufacturing-oriented organizations. I work in the engineering group for a large aerospace development program as a structures analyst for our avionics (aerospace-ese for electronics) group. My role & my day-to-day tasks, while related, are very much not the same thing. The headline objective of my role is to provide an engineering assessment that the avionics system will meet X life cycle with Y reliability.
What that looks like from a day-to-day perspective is some combination of a) gathering information on the proposed design (e.g. geometry, methods and materials of construction, expected environments, service requirements, etc.), b) performing finite element & classical analysis modelling, c) looping information from the results back to inform the design (e.g. starting the whole thing over again; happens roughly every 3 months). However, analysis of modern avionics systems is a tricky business, as modern electronic component design has advanced and we start talking about having tens of thousands of individual components on a given electronic board, many of which you only have sketchy details about the construction of, gleaned from a manufacturer’s data sheet that’s properly aimed at providing the sparkies (electrical engineers) what they need to design the circuit. On top of that, the dynamos (dynamic loads) group is only guessing at what the right random vibration spectrum to be applied to your box is (and it changes every few months as the box location changes, the structure it’s mounted to is redesigned, and the propulsion system turns out to have different characteristics than were guessed at the start of the program), and try not to think too hard about the shock environments, because there really isn’t a great way to model their effects anyway. At the end of the sausage making, I produce a stress report, which covers the expected construction of the design, the environments, and the expected performance of the unit.
However, after spinning up all of that work, nobody really trusts the analysis anyway, so the box gets sent to the environmental test group for shake & bake testing, where they’ll expose it to environments enveloping those predicted for its useful life (usually with some fudge factors to cover for underpredictions on the loads and variability in the construction), and if it passes then the box is good to go and we’ll fly it. So if the only real arbiter of whether the design was mechanically sound enough to fly is the test results, why pay me $$$ to sit around and hold up the design process with constant requests for more time to do analysis? Well, at the simplest level, because the company process manual says that all flight hardware must meet the requirements of the Structural Assessment Plan (SAP), which in turn requires that a structural assessment be performed on all flight hardware (stepping up a level, it’s a company requirement because there’s a requirement in both the government RFP that we are responding to & from several of the regulatory agencies that we have to be blessed by to fly). Go one level deeper and it’s a cost/benefit trade for our organization, because if my analysis can provide some insight into the mechanical performance of the box, we might be able to pass the shake & bake testing on the fifth attempt, instead of the 43rd attempt, as was done on one of the boxes that was designed when the organization was still in startup mode rather than taking the mature aerospace prime approach it’s trying to take on the current (order of magnitude larger) project that I was hired on for (e.g. trade test dollars vs. engineering dollars).
That’s still a pretty simplistic view of what my role is, though. It’s almost tautological, but when you do an engineering test, you only get the results that you tested for. That means you can demonstrate one (or practically maybe a dozen) test objectives pretty well, but you don’t really have much insight into what happens when you start to deviate from the test conditions (say a supplier of a certain specialized electronic kit decides to double their prices after shifts in the rare earth metals market make their previous price uneconomic, and you want to switch to a different design). By anchoring analysis models to the test results and analyzing the effects of a change (say you had to change a few electrical components and move a mount point a quarter inch), I can provide useful information about the magnitude of the impact of the change, which allows us to make a basis-of-similarity argument and avoid sending the design back into testing. Note that this is a bit of a dangerous game, and is where a lot of the engineering failures that make the news happen (I would at least in part attribute the engineering aspects of the MCAS problems that Boeing has been having to this part of the process). Wait a minute… why do engineering failures happen here, and why is my role of “ensure x life w/ y reliability” suddenly so much more vulnerable here? This sort of thing generally comes up pretty late in the project, when meeting delivery schedule is king, so is it because the engineering is rushed to paper over the problem and get the vehicle in the air? While I’ve felt pressure to get the job done, aerospace has a pretty strong safety culture, so I do have the time to make sure I’ve done my work to a point that satisfies myself & the review process. Why then, when a bunch of motivated, smart, experienced individuals spend a lot of time and effort, do we still get it spectacularly wrong sometimes?
Time for another tautology: complex systems are complex. To fight this, we generally take a “cheese cloth” approach to safety, which is basically the idea of having several independent layers of checks on whether something is good or not, so that even if one process check only catches x% of the errors, once you’ve gone through three or four gates you’ve caught 99.999…% (adjusted as needed for the application) of the errors. Structural analysis, testing, manufacturing processes, inspections, and audits of all of the above form some of the gates, where we can aggregate the collective knowledge of a very large, intellectually diverse, and experienced group of people. In the scenario above, while the analysis step was completed correctly, we did so in a way that cut out the testing layer of cheese cloth completely, and generally some of the other layers as well (e.g. review of the findings for other avionics boxes on the vehicle for common/integrated failure modes, or late-breaking changes meaning the tooling won’t be ready in time to build the first production vehicle, so it is built by non-standard processes, etc.). Therefore, my specific role is to strike the right balance between catching x% of errors and the cost/time it takes to do so, the balancing of which is the key role of the program management team (and so while I make snarky comments about how poor the mechanical design process was before they created my role at the company, it may have made sense for the project in question… though I’m convinced they just hadn’t caught up to where they needed to be as an organization quickly enough, and that was only true three or four projects back).
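The arithmetic behind those stacked gates is worth spelling out; a minimal sketch (the per-gate catch rates are made up, and the independence assumption is generous):

```python
# Layered "cheese cloth" checks: if each gate independently catches
# some fraction of errors, even leaky gates stack up fast.
# The 90% figure below is invented for illustration.
def combined_catch_rate(gate_rates):
    """Fraction of errors caught by at least one gate."""
    escape = 1.0
    for r in gate_rates:
        escape *= 1.0 - r  # chance an error slips past this gate
    return 1.0 - escape

# Four gates, each catching only 90% of errors on its own:
print(round(combined_catch_rate([0.9, 0.9, 0.9, 0.9]), 6))  # 0.9999
```

The real caveat, as the scenario above shows, is that independence assumption: cutting one layer often quietly weakens the others too.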
That also means my role has a broader impact than it would appear from a look at the mechanism of take design information → analyze design → adjust design and repeat as needed, as I have a role in defining & auditing the effectiveness of the process steps that come before and after me, particularly for things that have subtle effects on structural performance. For example, when building a printed circuit board, the electrical team will define which layers of the board carry electrical signals or power, and will route traces of copper to accomplish these purposes. Depending on component placement, this can leave large sections of a given layer without much copper required to meet the electrical function of the board, so the electrical team stops caring about what happens in these regions. When we go to manufacturing, though, this causes problems, because large bare patches make it difficult to control the assembly process, and so commercial fabrication houses will traditionally allow “thieving”, which is to say that, provided they keep X clearance to the customer-defined traces, they can add copper to fill in the gaps and make the board manufacturing more reliable (e.g. lower scrap rate). For most applications, this isn’t a big deal, but when you stick the board in very nasty mechanical environments (say, attached to an engine), the changes in board mass/stiffness can be significant.
Now this isn’t a problem for the early design, because your first board is manufactured by the same fabrication house that is going to build your immediate demand (they have their own internally consistent guidelines for how much thieving to add, so your boards always come out pretty close, and while your analysis model never even considered this, you correlated it to the test results and made your predictions). But what happens three years down the line when you switch suppliers, and the amount of thieving they need to meet their scrap rate is different (because their manufacturing process is different, and it’s not something they provide much insight back to their customers about)? This may have been traditionally handled in aerospace by a blanket ban on allowing any thieving at all, but that only survived as a half-remembered requirement from some veterans hired from other organizations, initially carried over because that’s the way it’s done and we don’t have time to be worrying about it. Then a few years and programs down the line, someone looks at the requirements and asks the appropriate question: why are we doing this weird thing that all of our suppliers are telling us makes their life painful, slow, and expensive (which they are billing back to us for), and that none (or practically none) of their other customers require (aerospace is way less than 1% of the demand for electronic hardware)? Without a structures person (or at least someone with a structures background) on the review team for that decision, it’s unlikely that anyone will have realized that it serves an important role in keeping the link between our “cheese cloth” layers of design, analysis, and testing anchored to the manufacturing & inspection processes.
Multiply this by the thousands of decisions large & small that get made in a typical day on a development program, and an important part of the role of every member of the team becomes attending meetings & briefings & socializing the changes they are making, because even though 99% of the time you think you’ve caught the implications at your level, you don’t know how they will spread. This then has to be balanced against spending so much time in meetings that you never get any of your own work done.
The long and short of it is that none of this stuff is obvious to the young college graduate engineer (and I even had the benefit of some special studies classes my program was trying out to address this gap), and it’s not something that’s simply born from experience either. You can get really good at your own part of the picture that way without ever needing to understand the cheesecloth to do your day-to-day job perfectly (e.g. I could do a full and complete analysis of every detail of the box design, make sure it had all the right requirements on the engineering documentation, and send it on its way), and yet never figure out how to improve the overall effectiveness of the organization. We really could have made change X if we’d mitigated it with tradeoff Y, and the benefits of X were greater than the costs of Y; even though I don’t know anything about X, I can provide the relevant information on Y to the decision makers, even though that’s not something I need to do in my day-to-day job.
And as a further note, while my experience is within the engineering team at the prime level (so the group where this level of salience is probably most useful), it has practical implications throughout the business process, out to the tertiary and beyond sub-contractors. Sitting at the top of the food chain, we constantly get fed stories from the factory floor, from installers, inspectors, auditors, cleaning staff (infamous, as changes in dirtiness are pretty good indicators of knock-on effects), and beyond, of “something just feels off”. Even if you aren’t aware of exactly how the domino chain fans out from your role, having enough awareness of what your role is to notice that things are different (both for the better and worse) allows you to react more effectively to those changes, raising red flags and adjusting priors as information comes back that things really are supposed to be different this time. Building things is very much a team sport, and understanding what your role is, beyond merely what your task is, sits at the core of teamwork.
This is a good reason to mandate getting feedback from all the downstream stakeholders in a decision.
I presume that, if any such feedback is mandatorily sought, it is generally sought at the manager level, and the workers are, at best, just asked what impact the decision will have on their job, without being made aware of where their job sits in the process.
Those sorts of expectations are likely greatly responsible for the “social exclusion” I mentioned earlier.
Yeah, when few enough people are employed it’s impossible to find time for cross-training, despite everyone believing that cross-department training is important. This is an issue whenever organizations seek to maximize profit-per-worker by minimizing number of workers.
So it’s no surprise when the people who are aware of the context of their job in the entire process are those who are more politically or socially inclined. They do it in their free-time at gatherings, or somehow convince the organization to pay for social events that allow them to learn in a manner most congenial to their natures.
Why are the organizations willing to pay for social events and social groups and not for cross-training? Perhaps because the higher-ups controlling the purse-strings are socially inclined themselves, and do a gut-based expense-justification check instead of a true value-added analysis? Perhaps because it’s easier to justify an expense if a lot of people gather together and ask for it than if a lot of people individually ask for it?
Sure, but even if someone does understand this, it doesn’t mean that they would effectively communicate their understanding of it if it’s not something that’s salient to their personality.
For some folks, I think getting the rote mechanics down is a fine start; then you can build a conceptual understanding on top of what they already know works. Like, you can show some folks the magic black box known as the quadratic formula, and when they’ve used it a hundred times and know it in and out, show them how to derive it from completing the square. Other folks might be better served starting with completing the square and then handing them the formula afterward.
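For what it’s worth, the black box and the completing-the-square derivation agree, and that’s easy to sanity-check mechanically; a toy sketch (the example polynomial is arbitrary):

```python
import math

# The "magic black box": roots of a*x**2 + b*x + c = 0 via the
# quadratic formula, which completing the square derives.
def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c
    assert disc >= 0, "real roots only in this sketch"
    sq = math.sqrt(disc)
    return (-b + sq) / (2 * a), (-b - sq) / (2 * a)

# x**2 - 5x + 6 = (x - 2)(x - 3), so the roots should be 3 and 2.
print(quadratic_roots(1, -5, 6))  # (3.0, 2.0)
```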
I was really well served by the former in my math classes, which makes me a little skeptical of moves toward purely conceptual approaches. Like I totally want a conceptual understanding eventually, believe me, but when I get that first I tend to have a hard time translating it to the math, and soon it’s lost and was a waste.
“Approaching things conceptually” is too strong for what I mean. I don’t expect high schoolers to be philosophers of math, understanding the axioms thoroughly. Those concepts usually fly over my head. I mean just a very basic understanding of what something is before moving on.
Here’s an example where I think the schools do it right. Before children learn multiplication, they have to learn addition. Then, when they learn about “2 times 4”, they have to add 2 together 4 times. You can see them count it out with their fingers. I don’t think eight-year-olds really grok why, but on a basic level, they have it right. Then after they do that, the teacher will make them memorize the times tables because it’s faster than trying to figure it out every time. (This is how I learned, anyways.) At some point in their education, math becomes this thing where kids are taught to memorize “6 times 7 equals 42” even though basic addition was glossed over in one lesson. They forget what “8 times 9” is, and don’t know how to figure it out.
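The “add 2 together 4 times” stage is literally a loop; a toy sketch of the finger-counting version:

```python
# Multiplication as repeated addition, the way kids count it out
# before the times tables get memorized.
def times(a, b):
    total = 0
    for _ in range(b):
        total += a  # add `a` together `b` times
    return total

print(times(2, 4))  # 8
print(times(8, 9))  # 72, recoverable even if the table entry is forgotten
```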
One of my stepdaughters has the problem that she has been trained by her math teachers to guess the password on her math work. She learns the rote procedures that the teachers lay out for her but understands almost none of it. I try to help her understand what’s actually going on, but she is very resistant to getting out of ‘we were given this procedure so this is what I have to do’.
I believe her learning process was broken by a really bad teacher of math she had in the 6th grade when I was just her mom’s boyfriend. This teacher would often assign homework that was impossible – not ‘too hard for 6th graders’ but ‘not enough information to solve the problem’. The same teacher (and I guess the whole school) assigned online homework where the problem would be presented, you would give an answer, and you would get immediate right/wrong feedback (either with the answer after every problem or right/wrong now and all the answers at the end). Often the ‘correct’ answers were wrong and there was no way to get them right.
One assignment reported that 1.5 was wrong – immediate feedback that she was doing it incorrectly but at the end of the assignment she ‘learned’ that the correct answer was 1.50. Nowhere on the assignment did it say how many decimals to include.
Another assignment flipped the form of the required answer apparently randomly. Problem 10 required a decimal to 2 places, problem 11 an improper fraction, problem 12 a mixed number, problem 13 another mixed number, and so on. No indication in any problem’s wording of how to present the answer. I found her in tears while working problem 8, having been told over and over she was ‘wrong’. I helped her with the rest, and of course things got worse in her mind as we kept getting more of them ‘wrong’, over and over. We missed 13 of 20, together, but once we got to the end of the assignment and saw the answers, we saw that she had legitimately only missed 2 of the first 8, and of the final 12 ‘we’ had only missed one more. I wrote an impolite email to her incompetent and lazy teacher about this homework that her mother wouldn’t let me send. The problem only existed because the teacher didn’t look at the online work at all before assigning it. It would have taken her 5 minutes to do it herself and see the problem before subjecting her students to it. It was 20 problems of 6th grade math and she’s a 6th grade math teacher! Anyways, the girl decided that year that she hated math and was no good at it. Too bad, she’s a smart kid, really.
My other stepdaughter never had that teacher and is somewhat more gifted at Math and is proud of how good she is at it. Unfortunately, this year she has a teacher as a freshman in HS who is apparently not good at math? The teacher gets hostile when she is asked a question and can’t really explain anything, apparently. At least she’s not doing damage like the other one did.
The really bad news is that neither of these two is the worst teacher my daughters have had! That ‘honor’ belongs to a 7th grade English teacher who accused the younger daughter of cheating (actually she accused her of plagiarism, and even gave the definition of plagiarism) on a 5 paragraph essay. The actual accusation is that my daughter and another girl worked on the essay together (they both claim they were told they could do so by the teacher), NOT that they plagiarized another source – writing it together was apparently plagiarizing each other, in the teacher’s view. Anyway, this teacher was fired because she didn’t really teach; she just showed up and handed out worksheets and played on her phone, or didn’t show up at all. I think the primary reason they were able to fire her is because the other teachers had to sit in her abandoned class during their prep period while she was off doing whatever. The other teachers wouldn’t really go to bat for her. But they didn’t fire her until the year was over, so it was pretty much wasted for my daughter.
TL/DR – the state of teaching is probably worse than you think
Every district/state is different, but I have one kid in special ed and one not, and the teaching methods are very different.
Common Core math tries to help this problem by teaching you a bunch of different (admittedly rote) ways to do each problem (e.g. multi-digit multiplication by adding areas, breaking down into 1000s/100s/…, etc.). This can help get at the underlying concepts.
It gets constant whining from parents because the kids aren’t using the same methods the parents were taught, so the parents don’t know how to help the kids.
Like I said to A Definite Beta Guy, it’s not the “non-conceptual thinking” part I really have a problem with. It’s the lack of foundations before moving on. I didn’t have a conceptual understanding of “22 times 28” as a twelve-year-old, but I knew how to work out the answer. And if you threw in extremely large numbers, negative numbers, and decimals, I didn’t go into a catatonic state.
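The “adding areas” method mentioned above handles exactly cases like 22 times 28: split each factor by place value and sum the partial products. A small sketch (nonnegative integers only):

```python
# Partial-products ("area") multiplication: 22 x 28 becomes
# 20*20 + 20*8 + 2*20 + 2*8.
def place_value_parts(n):
    """Split e.g. 28 into [20, 8]."""
    parts, place = [], 1
    while n:
        n, digit = divmod(n, 10)
        if digit:
            parts.append(digit * place)
        place *= 10
    return parts[::-1]

def area_multiply(x, y):
    return sum(px * py for px in place_value_parts(x)
                       for py in place_value_parts(y))

print(place_value_parts(28))  # [20, 8]
print(area_multiply(22, 28))  # 616
```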
Everything old is new again…
sorry…”borrowing” isn’t the standard, simple way to do subtraction!?
In my elementary school curriculum, that was the method that seemed like the holy grail of “oh, this is how it’s actually done” compared to… whatever lattices or such nonsense they were teaching us in parallel. But… of course you can just add the carry to the *checks Wikipedia* subtrahend one place over, instead of crossing out and rewriting a digit of the minuend one smaller. That video may have just sped up my pencil-and-paper arithmetic, if I can get the hang of the old technique.
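Worked digit by digit, that trick (often called the equal-additions method) looks something like this; a sketch for nonnegative results, with a made-up example:

```python
# Equal-additions subtraction: when a column comes up short, take a
# ten for it and add 1 to the *subtrahend's* next digit, instead of
# decrementing the minuend's next digit ("borrowing").
def equal_additions_subtract(minuend, subtrahend):
    assert minuend >= subtrahend >= 0
    result, place, carry = 0, 1, 0
    while minuend or subtrahend or carry:
        m, minuend = minuend % 10, minuend // 10
        s = subtrahend % 10 + carry
        subtrahend //= 10
        carry = 0
        if m < s:
            m += 10    # take a ten for this column...
            carry = 1  # ...and repay it on the subtrahend side
        result += (m - s) * place
        place *= 10
    return result

print(equal_additions_subtract(503, 178))  # 325
```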
That sounds somewhat like the system used in my kids’ elementary school 20 years ago. It was called Chicago Math by the teachers, and the process was that they would go over math skills very quickly and then move on to something else. Later they would come to the same subject again and cover it a second time. And this cycle would continue, with them covering every topic quickly but cycling back many times to cover it again. Maybe the teachers didn’t explain it well to me, but it never made sense to me, and neither of my kids learned math very well.
I looked up Chicago Math on the Internet and I get something else. The way education trends come and go, it could well have changed by now. But I wonder if the philosophy of your cousin’s math teacher is similar. It still doesn’t make sense to me, though.
The technique is more generally called a “spiral curriculum”.
When planning classes, it’s sometimes made sense to me to do things this way for certain topics, but I shudder at the thought of just doing this for every single topic in a class.
The idea with coming back around to something is that you begin by teaching the basics and how to handle simple cases, and then later on you come back and cover the thing in more detail. You can do this several times. I see three reasons why you might – when the reasons apply – do things in that order rather than just covering the topic once and for all:
1. Maybe once you teach the basics, you’ll keep using those basics as elements of every other thing you do. Say, you teach the multiplication table up to 5×5, then do a unit on word problems where you only deal with small multiplication tables, then you go back and teach a bigger multiplication table. Then all the time between coming back to the topic still helps reinforce it.
2. Maybe the topic involves a lot of memorization and if you try to make the students do it all at once, they’ll just get everything mixed up. I think language classes try to alternate vocabulary-heavy and grammar-heavy topics for this reason.
3. Maybe in between round 1 and round 2 of this topic, you learn a different tool that helps you deal with the advanced cases, but it didn’t make sense to teach that tool first. In other words, the topics in the class you’re teaching all depend on each other in subtle ways, and you can’t just cover one of them in a go.
Nice reply, Kindly. Your ideas make sense. It doesn’t sound like quite the same thing as I understood Chicago Math, though. You say you teach the multiplication tables only up to 5×5, and then do word problems on just those. I think that makes a lot of sense. But my understanding is that the teachers would instead teach all the multiplication tables, but too fast for anyone to memorize, and then return to the topic, thinking that the kids would get it if they keep coming back.
I like your method; I hope I was just misunderstanding the teachers. But again my kids didn’t learn math too well, so it didn’t seem to work. It is possible my expectations were too high. I have very good math skills and my kids are adopted, so not my genes. But I don’t think that’s it.
ProPublica has an article about the most glamorous part of the US health care system: medical debt collection.
https://features.propublica.org/medical-debt/when-medical-debt-collectors-decide-who-gets-arrested-coffeyville-kansas/
Hmmmm….
Looking at spending over the long term will show that the increase since 2010 is at a lower rate than the increase before, which is doubly impressive since many more people are now covered and they’re sicker on average than those who already were; see e.g. Wikipedia. This is talking specifically about patient-paid deductibles, which were increased, as far as I can tell, out of the utterly unsubstantiated belief that patient demand has anything significant to do with medically-unnecessary testing. This is the source of the obviously biased ‘Cadillac Plan’ and ‘moral hazard’ terms from the ACA debates – it’s also completely fixable in statute; write your congressperson.
Cadillac plan was a way to very slowly phase out the tax deductibility of employer-provided health insurance.
The idea that paying for your own care should bring the prices down absolutely has merit where small/individual buyers are the majority of the market. If the main buyers are still the large payers, then all this high-deductible stuff won’t move any needles. I don’t think “high-deductible plans didn’t work as currently designed” is a sufficient argument for “AHA! the market solution for healthcare was tried and FAILED! Medicare for All now!”
(I know that’s not what you’re saying; but a lot of people do….)
Only if you actually can shop around. I had a very minor procedure recently and put a decent amount of effort into the question of cost. I literally could not get either the facility, or the insurance company to tell me how much it would cost, either me, or the insurance company.
ETA: In advance, I mean; they were very happy to tell me how much it cost afterwards. To be fair, the nurse on duty was willing to give me a ballpark estimate, which was within a couple hundred dollars of accurate, but she was very clear it was just a guess.
Even without insurance, you pretty much always have to agree in advance to unlimited financial liability for whatever amount they decide to bill you. This is actually worse without insurance because when you’re in-network there’s a set rate (though you usually have no say in what codes are billed and the like).
I don’t know what to call that but “market” does not seem applicable.
Assuming that quantum computers keep improving, what areas of science and technology will have advanced thanks to it in 20 years? How exactly will things be different?
I think protein folding’s one of those things where QC could really help.
@Lambert
Why protein folding? Classical dynamics is extremely good for modeling protein folding. (That’s how molecular dynamics simulations work.) Like most other computational tasks, macromolecular dynamics seems like a great example of something that shouldn’t be helped by quantum computing.
Where’s it in the complexity hierarchy?
I just feel like it’s the kind of thing where I’d not be surprised if it was in BQP.
I’ll take back my comment, since quantum annealing should be applicable to minimization problems like protein folding. I’m still skeptical that this will actually be relevant, however.
Correct me if I’m wrong, but only certain types of mathematical problems are more easily resolved via quantum methods, right? Only problems that can be usefully resolved via quantum algorithms. AFAIK, only factoring large numbers and searching unstructured databases are better accomplished via quantum algorithms than by conventional means.
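Those two are Shor’s (factoring) and Grover’s (unstructured search) algorithms, and the search one is small enough to illustrate with a classical state-vector toy. A sketch on 4 items, where a single Grover iteration lands on the marked item with certainty (the marked index is an arbitrary choice):

```python
# Toy state-vector simulation of Grover's unstructured search on
# N = 4 items (2 qubits). Pure Python; no quantum hardware involved.
N = 4
marked = 2  # index of the item we're searching for (arbitrary)

# Start in the uniform superposition.
amps = [1 / N**0.5] * N

# Oracle: flip the sign of the marked item's amplitude.
amps[marked] *= -1

# Diffusion: reflect every amplitude about the mean.
mean = sum(amps) / N
amps = [2 * mean - a for a in amps]

probs = [round(a * a, 10) for a in amps]
print(probs)  # [0.0, 0.0, 1.0, 0.0]
```

For N = 4 one iteration is exact; in general Grover needs on the order of √N iterations, versus N/2 classical queries on average.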
I strongly suspect that when they’re actually available, we will see a lot more applications for quantum computers. Right now, cryptography is a place where we know two very generic algorithms that will be able to be used to attack a bunch of stuff we care about in the future, and also where we have good reasons to worry about quantum computers that come into existence in the future–if NSA records all the public-key encrypted traffic today, they can crack it when they get their quantum computer running.
Quantum simulations, cryptography, that’s about it.
I have the suspicion that computational sciences that rely heavily on algorithms, such as genetic analysis, will also have the potential to advance drastically.
I think 20 years is too optimistic. 30 years seems more realistic. Not that I’m a researcher in the field or anything, but I do keep up with the papers, and I don’t think we’re within 20 years of a quantum computer that revolutionizes any of the sciences.
That said…
Perhaps this is under ‘quantum simulation’, but I think the biggest impact might be designing and simulating novel materials which abuse quantum mechanics for fun and profit. By random trial and error we’ve only scratched the surface of what materials are physics-legal.
Thought experiment:
You have been allowed the budget and legal authority to create a self-governing zone called “Heaven”.
-What are its distinguishing features?
-What is the governmental form?
-What are your entry requirements (immigration policy)?
-Will you charge a fee?
-How do you deal with unrest and crime (personally, it would be zero-tolerance: cast out if time in “purgatory” while the case is heard results in conviction)?
-What does your budget pie look like (even if funds are unlimited, spending is not, and can be apportioned)?
“Heaven” is an artificial island like Dubai’s Palm Islands, populated solely by a fictitious religion of Heavenites. Their tithe goes toward a sovereign wealth fund, UBI for islanders and expansion of the island.
Distinguishing features: Life on the island is utopian – in that we have machines and cheap imported labour do all the hard or unpleasant work. Everyone has a ‘vocation’ that they spend time doing but it might be a sport, game or creative/artistic/scientific venture. Everyone gets a set income (100K, maybe less) from the wealth fund that they can use to bring in food or resources for their group/personal vocation and lifestyle (but no drugs). If you want more, you have to take it up with the dictator as to why you deserve special consideration.
Government form: Dictatorship with succession by appointment of the previous dictator. The dictator has wide-ranging power but hopefully shouldn’t abuse it. Pressure of community expectations should incentivise competence, transparency and thrift. Unlike in a state of a hundred million, the dictator personally knows the people he’s ruling, judging and administrating. Any corruption is stealing from discrete people, not ‘the treasury’. Besides, he can hardly get away with conspicuous consumption on a tiny island.
Entry requirements: Must be recommended by several other Heavenites and have a track record of virtue.
You’ve been paying 10% of your income to the church as condition of membership anyway, but if you get a ticket to the island you have to sell your house and car and so on and contribute it to the wealth fund.
There shouldn’t be any unrest and crime since we’ve cherry-picked wealthy and virtuous people and they should share very close communal bonds, being part of the same religion. Not more than 5,000 people should be on the island anyway. If there is serious crime, the offender should be exiled from the island and the religion. Should be a good deterrent, since they’ve sold all their worldly belongings as condition of entry.
Budget is hazy. If population is 4500 + 500 servants, then personnel might cost 450,000,000 + 10,000,000 in labour costs alone. There’s probably some hydroponics on the island, solar power and money might be made in intellectual property but this will not come close to breaking even. Maybe another 20,000,000 in maintenance, power, utilities and resources? I don’t know how much artificial islands cost. 500 million a year is my estimated budget, including security and expansion. That would require a sovereign wealth fund of 10-20 billion, depending upon rate of return. Apparently, the Mormons have 20-25 billion so that isn’t too unattainable.
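A back-of-envelope check of how that fund size follows from the budget (the rates of return here are assumptions, not research):

```python
# Perpetuity math: endowment needed to sustain an annual spend at a
# steady real rate of return. Both inputs are guesses from above.
def fund_needed(annual_spend, real_return):
    return annual_spend / real_return

budget = 500_000_000  # ~$500M/year, per the estimate above
print(round(fund_needed(budget, 0.05) / 1e9, 1))   # 10.0 billion at 5%
print(round(fund_needed(budget, 0.025) / 1e9, 1))  # 20.0 billion at 2.5%
```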
Let me preface this by acknowledging that this isn’t really an answer to your question, but I figure that it’s close enough that you’d want to see it.
With a name like “Heaven” the first thing that comes to mind is a rich people enclave like the titular Elysium in the 2013 Matt Damon movie. In that situation I’m picturing a megacorp-run community where the overwhelming majority of residents are its employees. For bonus points, the company would be called something circuitous like “Exilis Interpersonal Management Coordination Services.”
I don’t know what would be the core business of the company, but it would also have subsidiaries that would offer all of the services in the community: security, maintenance, sanitation, recreation and so on. The employees of the main corporation would be offered simple sleeping quarters and basic amenities for free in the residential area, eating in cafeterias and having access to gyms, gardens, and some entertainment resources. Those who chose to do so could pay out of pocket to move to better quarters, all the way up to small penthouses and houses with tiny (mostly symbolic) lawns – the epitome of luxury. The employees of the subsidiaries also receive free housing and amenities, but their quarters are of lower quality and further to the edge of the complex. Lastly there are the non-employees, who enter the complex to sell goods in the market areas (assuming they have permits) or who work for companies that rent store space inside the complex. This last category lives outside the complex and procures their own housing and necessities.
Security is enforced by the security department, who carry out their duties professionally and sometimes very enthusiastically (mostly when dealing with non-employees). All public areas are watched by CCTV to ensure the safety of the employees and their compliance with the rules and guidelines. These rules and guidelines cover everything from inter-employee aggression and defacement of corporate property to dress and grooming standards. Fortunately, adequate and guideline-compliant clothing is provided at a low cost by stores in the shopping areas. Failure to comply with rules and guidelines incurs penalties that officially vary from verbal warnings to termination of employment. This isn’t a euphemism for murder; it merely means the loss of one’s job as well as their living quarters, access to all of the services in the community, and likely contact with their friends and loved ones.
Admittance into this community is the same as the hiring process in any major company. The HR department ensures that vacancies are quickly filled with persons of the necessary skills and moral flexibility. Corporate culture is welcoming and tight-knit, but becomes more and more cut-throat the higher one goes up the ladder. The work space is very hierarchical, but the whole complex is technically work space, so you can imagine how that works. The HR department mediates interactions between employees, always making sure that their decisions are fair and what is best for the company. Employees occasionally leave the complex on their down time and venture into the non-corporate lands beyond, where they can forget their strictly regimented lives and engage in drug use, prostitution and other actions against the rules and guidelines. The legal department shields the employees from possible consequences of their out-complex actions that might try to find them, but if the severity of the backlash or its frequency becomes a liability, the employee will likely face termination. Any rumors about high management hunting former employees for sport in the out-complex slums are pure hearsay.
The budget is drawn up by the highly skilled and exceptionally miserly finance department, which aims to provide employees with the most productivity-enhancing lifestyle it can according to guidelines from above. This means ensuring that everyone has access to adequate nutrition, basic healthcare, regular sleep, hygiene, and enriching environments that reduce stereotypy and self-mutilation. A “friendly” competitive culture between sectors and departments to achieve the lowest possible costs ensures maximum economic efficiency.
There are actually quite a number of places called Heaven or Shangri-La or Xanadu, etc.
Besides small towns, a lot of them are tourist traps.
Generally beautiful, scenic tourist traps, but tourist traps nonetheless.
So I could think of it as some kind of “experience” tourism. Maybe it’ll be like Disneyland, but with the various interpretations of heaven from religions and mythologies instead of a jungle land, a western land, a future land. All fake, commercialized, and somewhat hedonistic, but still fun and entertaining.
Not a place I would care about organizing or governing though.
Congratulations! You have just been appointed CEO of a large health insurance corporation, ExampleCare. You were selected due to your bold new ideas for improving member experiences and health outcomes while keeping our costs about where they are now – the Board will accept increased spending temporarily, but it has to pay off in similarly decreased costs down the line, say within five years. You have complete authority to change ExampleCare’s internal processes, covered services, public health and socioeconomic initiatives, etc. within the bounds of law. We offer a variety of insurance plans, ranging from ACA marketplace plans to Medicare Advantage options and state Medicaid subcontracts.
It’s the evening after your appointment; the champagne has been poured, the cigars have been lit, and the Board is waiting for your speech. What new and exciting changes will ExampleCare be making?
You know that thing TV shows do where the villain of season 1 becomes a hero in season 2?
I’m selling ExampleCare to Private Equity. The way to improve health outcomes and reduce costs is to burn the whole parasitical health insurance industry to the ground.
I’m working out how to partner with a discount airline and several foreign clinics to outsource as many of our patients’ optional surgeries and pharmaceutical purchases as possible to destinations outside the US. We’ll cover that knee replacement at 70% here in the US, or you can take a flight to Mexico, where our partnered orthopedic clinic will do it for a lot less, and we’ll cover it at 100%. Members get cheap flights for short trips to various foreign destinations (Canada, Mexico, various Caribbean countries) as a benefit, and while we can’t formally tell you to buy your drugs there, your drug benefit covers 100% of costs there but only 50% in the US. Oh, and there’s an entirely informal website that lets you get someone else to pick your drugs up from Canada/Mexico/wherever for a small fee. We are also introducing an advanced telemedicine system where you go to a local clinic and are seen by a nurse or medical assistant, and then consult with a doctor in some lower-cost place over Skype. Add in some kind of on-call doctor in case you need to be seen hands-on by a doctor.
Basically, the US medical system is in an unfixable cost spiral and has bound-in costs that can’t be lowered much, so let’s just do a damned end-run around it.
Outsourcing health care to other countries would plausibly work, but it doesn’t meet the original poster’s requirement that your plan be within the bounds of current law, which has lots of requirements about local provider network adequacy and maximum out-of-pocket expenses.
Would that actually be legal? It’s one thing to offer equivalent coverage when travelling abroad. More interestingly, if US health insurance *doesn’t* cover repatriation, why don’t insurers jump at the opportunity to cover medical care everywhere else? My current health insurance only covers me abroad if travelling on company business, which I don’t understand.
I was wondering the same thing. I wouldn’t be surprised if there were something in the legislation concerning medical insurance that required services paid for to be provided by formally qualified (US) health care professionals. That would torpedo a plan to source medical services across the border.
I’m not sure that it can be done, particularly with an established organization, full of people who don’t want to change. But even without that, many of my worst interactions with health insurers have clearly been the result of money saving attempts on their part. (The harder it is to get an appeal considered, and the more often perfectly valid claims are randomly denied, the better for their bottom line….)
I think an insurance company that could eliminate surprise bills and force a pre-agreed price to the customer and a single bill at the end of the treatment for all medical services would be a huge win in terms of user experience. Medical billing is optimized for fraud.
Been tried – HMOs. People hated them when they were popular.
That’s a paradoxical way of phrasing it. But yeah, my impression at the time was HMOs were loathed.
Good catch – “common” would have been better. Or “popular with employers”.
Why were they hated?
@eyeballfrog says: “Why were they hated?”
Read this short piece from 1998: https://www.washingtonpost.com/wp-srv/business/longterm/ethics/hollywood.htm
I know a bunch of people who have Kaiser and are extremely happy with it;
HMOs delivered, at worst, care that was within statistical noise of other offerings. And at cheaper cost.
People hated the experience, though.
This is why I am always nervous about any new health care system proposals. I think there are lots of ways to make the system better along the obvious metrics, like outcomes and cost. But people hate being told there is some care they could get but The Man won’t pay for it, whoever The Man is. So we will get halfway through implementing the new plan, people will realize that they hate some significant unpopular part of it, and then the politicians will take out that significant unpopular part — except that the unpopular part wasn’t in there for shits and giggles; it was actually necessary. And then we staple another kludge onto the kludgocracy.
I think the issue with something like Kaiser is that it works great, as long as you are generally healthy. I have Kaiser and love it because it is cheap and efficient. However, I also know that if I ever have an uncommon health issue I will be mostly screwed, and would have to pay a ton out of pocket to get to see a real specialist. Since I’m generally healthy and Kaiser is excellent on common issues, I feel like it’s worth the risk. As I get older though, I will probably look for something else.
We immediately partner with several teaching hospitals in our coverage area to fund double the number of residency slots. With the new residency slots, we bake ongoing contracts into the residency agreements: residents will continue to work for us after residency for a certain period of time at below-market rates. Then we offer care to our policyholders at a discounted rate at the facilities where these discounted doctors will work. Since these are marginal doctors, who would not have qualified for a residency program without our new slots, they may not be the best, but they should be willing to work for less than current doctors. Also, increasing the supply of doctors will create downward pressure on doctor compensation across the board.
To address the problem at the other end, we will begin offering end-of-life payouts to any customer that contracts a potentially fatal condition in lieu of treatment. Payments will begin at 25% of expected expenditure on the patient, and be negotiable up to 75% of expected expenditure. For example, if a patient contracts a cancer that we expect will cost $1 million to treat, they will be offered a $250-$750k lump sum payment to not have the cancer treatments be billable to our insurance. This will be marketed as power for the customer to choose their own terms for end of life, showing lots of elderly people taking one last elaborate vacation before they pass, instead of spending their last days in a hospital.
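For concreteness, the payout arithmetic above can be sketched as a tiny function. This is purely a hypothetical illustration: the function name is invented, and the fixed 25%–75% band simply restates the figures already given in the proposal.

```python
# Hypothetical sketch of the end-of-life buyout described above:
# the offer is a lump sum between 25% and 75% of the expected
# cost of treating the patient's condition.

def buyout_range(expected_cost: float) -> tuple[float, float]:
    """Return the (minimum, maximum) lump-sum offer for a given expected cost."""
    return (0.25 * expected_cost, 0.75 * expected_cost)

low, high = buyout_range(1_000_000)
print(f"Offer between ${low:,.0f} and ${high:,.0f}")  # prints "Offer between $250,000 and $750,000"
```

So the $1 million cancer example works out to the $250k–$750k window quoted above, with the exact figure inside that window left to negotiation.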
The press is going to have a field day: indentured servitude, death bribes, the eternal enmity of the AMA…
That’s why we need to hire the best marketing team out there. It’s not indentured service, it’s investing millions in the next generation of doctors. It’s not death bribes, it’s allowing our customers to live life to the fullest while they can. With good enough spin the AMA will be thanking us.
Well good luck! I actually like those ideas but I’m expecting riots 🙂
This is a brilliant idea that will proceed to have hilarious unintended consequences when some clever patient realises that since insurance cannot deny coverage for pre-existing conditions, they should take the lump sum and immediately switch to a different insurer who will be obligated to treat them. I expect the knock-on effects of this will culminate in Congress making this practice illegal if the courts don’t do it first. Also expect some very expensive lawsuits from the relatives of people who took the lump-sum option, then months later realised they really didn’t want to die and desperately started seeking treatment at a point where it was much too late. There are going to be lots of sob stories along those lines blasted all over the media and courtrooms.
We can use this to our advantage (until Congress outlaws it). Make the payout tied to a loyalty metric, like 2% of expected cost per year you have been a customer. Charge customers an additional premium, since they know they may have the option to take the buyout, and reap increased income from premiums until we actually do have to pay out. Then we pay out at a fraction of what our true cost would be, and our competitors take a hit as well (at least until the competition realizes what we are doing and copies it; maybe we can get a business-methods patent on the idea and be the only ones to benefit from it until it is eventually outlawed).
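A minimal sketch of that loyalty-weighted payout, under stated assumptions: the 2%-per-year rate comes from the comment above, but the function names and the decision to cap the rate at the 75% ceiling from the original proposal are my own additions.

```python
# Hypothetical sketch: the payout grows by 2% of expected treatment
# cost per year of customer tenure. Capping the rate at the original
# proposal's 75% ceiling is an assumption, not stated in the comment.

def loyalty_payout(expected_cost: float, years_as_customer: int) -> float:
    """Lump-sum offer scaled by tenure, capped at 75% of expected cost."""
    rate = min(0.02 * years_as_customer, 0.75)
    return rate * expected_cost

# A ten-year customer with a $1M expected treatment cost is offered 20% of it.
print(loyalty_payout(1_000_000, 10))
```

Note that under this scheme a new customer is offered almost nothing, which is exactly the incentive to stay (and keep paying the extra premium) that the comment describes.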
I’d pour all my resources into finding a way to distill and bottle “judgment based medicine” and not “rules-based medicine” and sell it. And then I’d branch out to teaching and hosts of other areas where some unknown force is trying to destroy all human judgment in favor of rules, rules, rules.
Echoing Gobbobobble, doing what I can to burn it to the ground and start from scratch, or lobbying super-hard to truly liberalize this market – i.e. not the Centers for Medicare and Medicaid Services making a tiny pilot of “innovation under these 20 conditions”. The rules are so many, so conflicting, so contradictory, so cumbersome that calling it a “market” is laughable; the overhead is so big it seems impossible to quantify. The field is stacked for Example-corp to become evil or fail, or remain niche and small – however noble its intentions.
Or
We’re going to invest in lobbying by supporting our own set of politicians who will ape the Party lines, but with the addition of being in our pockets. Apparently that sort of spending is very cost-effective, and that is, after all, the group which will most determine our profits, given the current level of state and federal health insurance regulation.
Alternately, we’ll go ahead and lead an industry-wide lobbying cartel, but the preference is, if we can afford it, for them to be in our company’s pocket specifically.
The blackface discussion below raises a kind of interesting meta-issue that seems appropriate for a CW-permitted thread: When some action or symbol or word choice is decried as offensive, the speaker doesn’t just mean “this offends me” or “this offends many people,” he generally means “you should also be offended on behalf of the people offended by this.” So if most blacks find blackface offensive, but most whites don’t really care, then the argument is that whites should take offense on behalf of blacks.
Deciding whose offense should be taken up by others, whose should be ignored, and whose should be silenced or attacked as itself offensive is where the action is, here. There often seems to be a notion that we’re obliged to take offense on behalf of some groups or individuals. Some examples:
a. PZ Myers’ treatment of consecrated host was quite offensive to a lot of Catholics–should everyone have joined in on our offense? Were non-Catholics obliged to join in on our offense?
b. There have been occasional jackass publicity-seekers who announced their intention to, say, tear up a Koran on TV or something, offending the hell out of a lot of Muslims. Are non-Muslims obliged to join in on their offense?
c. Many people with traditional values are offended by open homosexuality. Lots of people are offended or at least made very uncomfortable by trans people. Are the rest of us obliged to take offense on their behalf?
d. Various cartoonists have drawn pictures of Mohammed, which offends some Muslims. Are we obliged to take offense on their behalf?
e. Some people find some previously-widely-used terms offensive, like “Oriental,” “colored,” “blind,” “retarded,” etc. Are we obliged to take offense on their behalf, when those words are used?
f. Some people are offended when aspects of their cultural heritage are used by others (or used casually by others). Should the rest of us join in on their offense?
g. Some people find the names of sports teams (Redskins, Indians, etc.) offensive and demand they change. Are the rest of us obliged to also find those things offensive?
h. Some people find interracial couples to be really offensive and upsetting. Should the rest of us join in and demand that interracial couples not offend those folks with their presence/visibility?
i. Some people find disrespect toward the flag very offensive. Are the rest of us obliged to go along?
and so on.
A huge number of public outrage-fests and think-pieces seem to follow this pattern. The argument is over whose offense matters and whose doesn’t. The argument seems to be that we all are obliged to adopt a kind of transitive offense from some groups, but definitely not others. The people who are offended by being called “Oriental” instead of “Asian” deserve our support; the ones offended by gay or interracial couples appearing in public deserve our scorn. Either kneeling during the National Anthem or standing during it is offensive, and we just need to decide which one.
I’m not convinced that transitively taking offense makes the world a better place very often. It’s mixed in with the normal social expectation of politeness, though–if you’re going around offending people all the time intentionally, I’m probably going to think you’re an asshole and treat you accordingly. OTOH, there are definitely people who take offense as a strategy, and others who are very sensitive about issues I don’t think are really all that important. And taking offense in public is also a way of getting a lot of attention, which is why we get a lot of it at present.
I have a lot of thoughts on this but I’m not really sure how to put them down coherently. While I mull it over, I’ll ask this: Is “blind” actually considered offensive now?
I’ve seen people who seem to think it is, but most people don’t. It’s probably more like “black” vs “African-American,” where there was a push to change the acceptable term but it never really got much traction.
Huh, I was told all growing up not to use “black”. Has that changed? Have the PC folks given up on that one?
My memory only goes back to the 90s, but “black people” is entirely inoffensive and the default term outside of professional-speak. “African-American” probably got outvoted for being too verbose.
To me “black” implies skin color (with some caveats, Beyonce is ‘black’ despite her shade), while ‘African-American’ implies someone ‘black’ whose ancestors have been on this continent for generations, so both Barack and Michelle Obama are black, but only Michelle is African-American (though arguably he is by marriage).
Ironically, I use the terms the opposite way. Idris Elba is black, but he’s not African-American. Elon Musk is African-American, but he’s not black. Barack Obama is both African-American and black, Michelle Obama is black but not African-American.
@mdet,
I get that, it’s putting the “African” as where the family recently migrated from, but when the specific nation on the continent is known (Kenya) and recent, then “Kenyan-American” would be used instead (though actually that would be his Dad, with Barack being a “mixed Kenyan and Kansan”, and “African-American” being used for those whose immigrant ancestors are very far back).
This also goes for “European-Americans”: someone who has both parents born in the same other nation (i.e. Ireland) would be “Irish-American,” but add an Italian parent as well (a common mix) and then the general continent is used.
I do realize that “Asian-American” is commonly used even when the specific nation ancestry is known, but what you gonna do?
Really I wanted a way to distinguish say Kamala Harris (Indian and Jamaican-American) from say Cory Booker (but even that doesn’t quite fit; most ‘African-Americans’ have some European ancestors as well, and it seems most who have American great-great-grandparents verbally claim some American Indian ancestry as well, whether black or white).
I know at work I’m referred to as “one of the Irish guys” though my ancestors are decidedly more mixed.
It’s a demonstration of power.
PZ Myers can be extremely offensive towards Christians because his side has the cultural power and Christians are not only “not protected” by the left, they are actively oppressed. He would never do that towards Muslims, who are a protected class of the left. The ones that attack Christians and Muslims alike (e.g., Sam Harris, Richard Dawkins, Christopher Hitchens) get rebuked strongly.
Meanwhile, saying stuff like “men aren’t women” can get you banned for life from Twitter. There is no other logic for what is allowable or not in public life. Offend the left, get destroyed; offend the right, get promoted.
By enforcing leftwing taboos and laughing at rightwing taboos, the left is just demonstrating what it can do.
Yeah, that sums it up perfectly.
That’s not true, though. The infamous communion host incident also included desecrating some pages from the Koran.
Just for the record, I’d have considered him an asshole for the Koran thing even if he’d never done anything with consecrated hosts. I’m not offended by it, exactly, I just think doing it with the goal of getting attention and offending people marks you out as an asshole.
To differentiate between two offensive incidents, I think there’s a big difference between Koran burning and Mohammed drawing.
In our culture, we use cartoons as a way of communicating pithy points. Yes, those points can be offensive, but the act itself isn’t. Our country itself, it’s founders, other religious leaders, present day leaders, they’re all fair game.
On the other hand, we use book burning as a way of declaring anathema.
It’s a difference of saying “There’s a problem with X” versus “X has no place here.”
Muslims who are offended by the latter are trying to get equality (on this point, at least). But Muslims who are offended by the former are trying to get special treatment.
There’s a case to be made for “respect each group with the form that is meaningful to them” but it’s definitely more of a demand than “don’t revile us in particular.”
@albatross11 Oh, I agree. I don’t like any of the “new atheists,” but IMO Myers is easily the worst of a bad lot, and the stunt in question is a good illustration of why. It had no intellectual point, nor was there anything interesting about it. Its sole purpose was to hurt religious believers and laugh at them for being hurt.
@Randy M
This reminds me of something I don’t think I’ve articulated before, which is that some such requests seem to appeal basically to respect, but aren’t actually consistent with respect. Like, there’s such a thing as courtesy, where many non-Catholics will still address a priest as Father, or total strangers will call PhDs Doctor, and it’s basically a matter of politeness and not anything meant, and not something anyone can be made to do, either. But I think we more commonly think of respect as something that should be sincerely felt and given.
So for instance I run into problems processing requests for me to “just use the damn pronouns,” as a matter of common decency or respect no less, because it seems like I’m being asked to pretend. To put on a show of affirming a chosen gender when I actually don’t. I certainly haven’t had the experience, so maybe I’m misunderstanding it, but from my armchair that sounds more offensive.
ETA: wording, a bit
If you want self-esteem, you want to be given heartfelt respect.
If you want to demonstrate power, you want even people who dislike you to feel the need to acquiesce.
Pronouns are a mix, I think, or varies by individual.
In the case of titles, using them also demonstrates respect for the organization that bestows them. Someone who feels the Catholic church has zero authority may chafe at using the term ‘Father’ in the same way they would with a doctor with a mail-order diploma or a general appointed by a tin-pot dictator.
I don’t see one (assuming you mean drawing Mohammad as deliberate provocation rather than without realising that it’s offensive). Is burning a Bible significantly different to desecrating a host? That’s the same thing.
I did try and explain the difference, but I’ll give it another go. A cartoon is something we use to communicate an argument. The medium is not offensive in our culture, even if the message can be. (Some) Muslims reject any right of even non-Muslims to draw Muhammad in any way, because in their culture they see artistic depictions of the sacred as blasphemous. We do not. The offense was due to the foreign cultural values the minority group was trying to impose as a one-sided norm.
When an artist uses a cartoon to make a point, he is not doing something we view as fundamentally offensive. This doesn’t hold for book burning.
There have been times when drawing Muhammad was done as a way of pushing back against the encroaching norms against speech that minority groups claim offense at. This is somewhere in the middle, it has a clear message and argument to it beyond just the hurt feelings–establishing a universal norm.
I don’t know exactly the form Myers’s desecration took. It might fall under one category or another. Edit: Looking this up, I found a report that he threw it in the garbage and took a picture. This seems like a deliberate gesture of contempt rather than making an argument in pictorial form; much closer to book burning than drawing a cartoon is, and also not a practice commonly done in a variety of other contexts.
The binary you’ve presented begs the question. There are spaces, e.g. art, where one may wish to draw Mohammed in a way that is not a deliberate provocation, even while realizing that some find it offensive.
There really are no spaces where one would burn a Koran other than for deliberate provocation and offense.
@Randy M
I think you’re isolating the act of cartoon drawing from the motive but not doing the same for book burning. If you ignore motive, there’s nothing wrong with drawing any cartoon, but equally it’s fine to burn books if, say, you would otherwise freeze to death. If you’re going to say that book burning or host desecration are different because they’re contemptuous, then you need to also defend the motives of cartoonists. I think that it is reasonable to argue that, say, the Jyllands-Posten cartoons were less “objectively offensive” in some sense than someone burning a Koran or Bible, but that’s entirely based on the intent and context, not the act itself. Burning a book as protest against severe religious censorship would be far less “objectively offensive” than either (indeed I would view that as morally commendable).
I don’t think offense can be objectively determined. The cartoons episode just demonstrated an incompatibility between western culture and Islam. We don’t have the notion that drawing something is offensive, while in Islam, drawing Mohammed or Allah is a big deal.
There is another level of offense that can occur if the drawing itself is disrespectful, but that is not what Islam forbids, Islam forbids any kind of drawing.
In this case anyways, when liberal western culture ran up against Islamic teachings, liberal western culture folded like a cheap suit.
Well of course it’s based on context. In the context of the cultures in which the controversy took place, book burning is always an offensive act, and cartooning (in a publication) is always a valid form of expression (and the message itself may be an offensive one, but that wasn’t what the dispute hinged on). Whereas in Islamic culture, any depiction of their sacred objects is offensive. That’s fine for them, but we don’t have an obligation to live by their rules. I consider it supererogatory to honor others by their own customs.
Now, is there some situation where throwing an object in the garbage is a valid form of expression? It’s less of a defined symbolic act, but the medium seems to me significantly closer to book burning than cartoon drawing.
In western society, offensive cartoons are just a part of life. See: 90% of political cartoons.
When a cartoonist draws something heaping abuse on their outgroup, people who respond with death threats are treated as the problem. But apparently when someone draws Mohammed, even in an innocuous context, the fargroup are totally in the right to issue threats and really the cartoonist should have known better.
The principle seems to be that everyone should be tolerant and practice liberal values, except Muslims. They have a right to be homicidally offended, though Leftists never make it clear why.
We can infer that they consider Muslims a race, so this is “punching up” against racism, but I’ve never seen anyone murdered for making caricatures of black people. So the exception is unique and inscrutable.
Because we don’t tend to believe it. The ones that do believe it are generally happy enough to inform you of their crazy beliefs and you’re free to write them off as fringe nuts. What you don’t get to do is imply, constantly, every chance that you have, in every open thread it comes up, that this is anything more than a fringe element of the left, and I still want you to stop doing so.
mitv150:
So it seems like you’re making the distinction between:
a. Doing stuff that’s intended as offensive in our culture and taken as offensive in their culture.
b. Doing stuff that’s normal in our culture and taken as offensive in their culture.
With (b), you might do it without trying to be offensive, just as part of your normal day-to-day life. With (a), it can only be done as an intentional provocation.
Am I understanding your point?
The principle seems to be that everyone should be tolerant and practice liberal values, except Muslims. They have a right to be homicidally offended, though Leftists never make it clear why.
I don’t think many people justify something like the attack on Charlie Hebdo. Many people think the cartoonists were bad people for the way they behaved toward Islam, but outside of fundamentalist Muslim circles, I don’t think you’re going to find many people saying shooting the cartoonists was justified.
@albatross11
Yes, you have it accurate. (note that my reply was to thisheavenlyconjugation, not directly to your comment)
@albatross11, mitv150
Such a distinction assumes that all that matters is whether you know that something is offensive to someone. However, if people have to refrain from anything that is offensive to someone or to a group, then almost nothing is allowed anymore.
I reject the idea that if you know that people take offense, you are obliged to change. At the very least, there should be a cost vs benefit decision, although in a somewhat liberal society, you simply have the right to do everything that is not banned by law.
@Randy M
That’s not the relevant analogy. My point is that
And therefore if you are being consistent you should also conclude
in this case.
@Gobbobobble
Is that directed at me, and therefore either wilful or incredibly stupid mischaracterisation of my beliefs? Or are you injecting irrelevant low-quality snark at some non-present and quite possibly imaginary outgroup? Either way, it’s bad and you should feel bad.
That’s not a direct quote, but rephrased like that, yes, this is a less offensive act in American culture than burning a book.
Because of other factors discussed (intent, message, etc.) I’d put Myers’s action on par with “draw an offensive cartoon of Mohammed to piss off censorious Muslims” and well above “treat Mohammad like you would any other person and use his image to make a point.” Which is about on par with the cleaning lady accidentally throwing out the wafer left in the wrong place.
Myers’ act was not simply about disposing of certain crackers. He acquired consecrated host to perform it, which requires passing oneself off as a Catholic and making off with the host in secret. This is more akin to stealing a Bible from a church to be burned than simply buying one, and it is phenomenally disrespectful.
It was in response to the general line of “cartoons aren’t offensive in the West”, actually, but if you wanna sanctimoniously clutch pearls, go right ahead.
Aapje:
If you are going out of your way to piss someone off, I’m generally going to think you’re kind-of an asshole. If you’re going about your normal business and piss someone off inadvertently, I’m not generally going to think you’re an asshole.
If some non-Catholic goes up and takes communion out of ignorance, or even out of just wanting to blend in and not caring much about our rules, I’m not going to think they’re a bad person. Even if they walk out with the consecrated host in their hand without intending to do anything offensive, I think they’re just people who inadvertently did something offensive to the beliefs of Catholics. But if they plan to take the consecrated host and publish pictures of tossing it in the trash or stomping on it or something, going out of their way to offend people, I’m going to think poorly of them. There can be reasons why you need to do something that offends someone else (think True, Necessary, Kind), but if your goal is “this’ll really piss those bastards off,” you’re probably just being a jerk.
The ‘asshole’ thing seems to be a superweapon based on appeal to politeness rather than formal rules. Problem is, since it fails to apply any judgement to the demands being made, just the refusal to accede to them, its effect is to ratify unreasonable demands.
A Catholic priest expecting you not to take a consecrated host from his hand, conceal it, and later defile it is one thing. An imam demanding that no one draw Muhammad is quite another, and I’d argue that if you’re going to apply some sort of “asshole” standard, those making demands that no one draw Muhammad are the assholes.
@DeWitt:
What is the mainstream leftist (for values of “left” that extend as far as Theresa May) position on violent crimes (crimes under Western law, that is) inspired by Islam?
When they disagree with another belief system, like Christianity or racism, they seem very concerned about stopping it well short of the feared actions, not allowing the number of people with odious beliefs to increase, etc. So if orthodox Islam inspires certain violent acts, what is the Left proposal to stop it? What’s your equivalent of an immigration ban?
(I’d be thrilled if the answer was “an immigration ban”, because then it would be consensus and I wouldn’t have to support Trump.)
Thank you for asking this rather than making assumptions. I very genuinely appreciate that.
The mainstream leftist opinion is that crimes inspired by Islam are dangerous and need to be halted. You see this in no leftist parties being in favor of putting Islam above the law, none of them asking for muslims to be pardoned, or even just giving them lighter sentences. The US hasn’t had a large attack in a while, so, enjoy.
https://edition.cnn.com/2015/11/16/world/paris-attacks/index.html
A country sees more than a hundred people get shot, and the president of France, someone from the literal Socialist Party, decides this means war.
https://exmuslims.org/
This is an organization of ex-muslims, mostly homosexuals and atheists.
They say they have no political home– the right isn’t fond of homosexuals and atheists and the left is unwilling to acknowledge how dangerous mainstream Islam is to ex-Muslims.
For what it’s worth, I mostly hang out in leftish circles, and it took me some work to internalize just how much Muslims and ex-Muslims are at risk from other Muslims.
@Gobbobobble
What do you mean?
This
is evidently snarking at someone (unless you mean to say you are genuinely expressing that opinion). Who?
I don’t recall anyone on the right suggesting that James Alex Fields should be pardoned, either, so I guess we’re all one big happy family here.
But what’s the left’s position on how Islam should be treated for its role in inspiring crime? If crimes inspired by Islam are dangerous and need to be halted, should the left perhaps be looking to halt them at the source?
Because that’s something the left seems to favor in other circumstances. James Alex Fields, therefore the far right is a Nazi menace that needs to be stopped from inspiring dangerous crimes; it should be opposed and ridiculed and punched and driven from polite society wherever leftists hold sway. Some anti-abortion murders that mostly petered out a decade ago, therefore Fundamentalist Christians are a menace that needs to be stopped, etc. Elliot Rodger, therefore incels are the scum of the Earth and need to be thoroughly marginalized lest they inspire any more crimes. Etc, etc, etc.
As Nancy Lebovitz notes, Islam inspires its followers to go out and kill anyone who decides not to be a Muslim any more, and they’re not at all secretive about that. Therefore, Islam is…
Help me out here.
You’re Americans. Signaling your support of Islam is cheap in countries where Muslims are a statistical footnote. Half of them are your own people turned convert; they’re not particularly orthodox, and they blend in very easily because of these things. Noticing that they’re not treated as harshly by the left as people arguing against abortion or for the murder of all women is something Scott picked up on five years ago in I Can Tolerate Anything Except The Outgroup: Muslims are a fargroup for American leftists. Extremely few of them know any Muslims, fewer still know any conservative Muslims, and they don’t have a visceral reason to care. Signaling allegiance to the ingroup is much more important to them than worrying about a bunch of native-born blacks and whites who converted to a trendy religion.
Over here, the response to the question is that we haven’t gone so full-on partisan that nobody can get along anymore. Leftists who govern actually have to deal with Muslims who aren’t basically regular people who stopped eating pork, so going along with whatever they say isn’t at all viable. I feel comfortable saying that you’re noticing a particularly American problem here.
It seems to me that “outgroup homogeneity” is really confusing this discussion.
Islam is not one thing, any more than Christianity is.
@Le Maistre Chat @John Schilling
The mainstream leftist view is approximately:
Islamic terrorists : Islam :: Westboro Baptist Church : Christianity
(This isn’t to say the magnitude of wrongdoing is remotely comparable, but the attitude that “these terrible, nonrepresentative Muslims/Christians have twisted the core, peaceful teachings of their religion and do not represent the vast majority of nonviolent/non-asshole Muslims/Christians” is the same.)
OK, but the mainstream leftist (as opposed to merely liberal) view seems to be that organizations like the Westboro Baptist Church prove that Christianity as a whole needs to be kept down lest it return to its bad old ways across the board, that Christians should be presumed to be bible-thumping joy-hating gay-lynching bigots until proven otherwise, and that anyone who isn’t that will have the decency to not call themselves “Christian” in public.
Stop it, John.
Here’s six.
“You built a factory out there, good for you. But I want to be clear. You moved your goods to market on the roads that the rest of us paid for. You hired workers that the rest of us paid to educate. You were safe in your factory because of police forces and fire forces that the rest of us paid for.”
“Now look, you built a factory and it turned into something terrific or a great idea, God Bless, keep a big hunk of it. But part of the underlying social contract is you take a hunk of that and paid forward for the next kid who comes along.”
“I hear all this, you know, ‘Well, this is class warfare, this is whatever.’ No. There is nobody in this country who got rich on his own – nobody.”
“Other countries around the world make employees and retirees first in the priority. For example, in Mexico, the bankruptcy laws say if a company wants to go bankrupt… obligations to employees and retirees will have a first priority. That has an effect on every negotiation that takes place with every company in Mexico.”
“Every time the U.S. government makes a low-cost loan to someone, it’s investing in them.”
“To fix this problem [of stagnant wages] we need to end the harmful corporate obsession with maximizing shareholder returns at all costs, which has sucked trillions of dollars away from workers and necessary long-term investments.”
Christians are not oppressed by the left. They are allowed to congregate in places of worship unmolested. They are not fired for being Christian except when Christians insist their religion interferes with the course of their job, which happens to people of every religion (a Muslim taxi driver who refuses to drive drunks around because alcohol is forbidden is soon going to be a Muslim without a job). They are not attacked for their faith, nor forced to hide it, again excepting situations when anyone would be expected to leave their faith at the door. Christians are allowed to preach peaceably in public places, including state-funded places such as colleges – I know, because there was always a dude with a sign about how much God hated me at my college.
Look at Soviet Russia, modern day China, or modern day Iran for a picture of what religious persecution of Christians looks like. Hell, read the last few books of the Bible for a picture of what religious persecution looks like – or even Reformation Europe.
We don’t have a lot of religious oppression in the US, thanks to laws going all the way up to the constitution forbidding it. On the other hand, in mainstream media culture, things that offend Christians seem less upsetting to a lot of people who get a lot of airtime than things that offend some other group–Muslims, for example.
As best I can tell, this is pure “I can stand anything but the outgroup.” For the kind of people who become media elites, American Christians (especially fundamentalist Christians) are the outgroup, whereas Muslims are a faraway group that doesn’t register in local conflicts.
Keep in mind that nearly seven-tenths of the country is Christian. Even media elites can’t actually afford to do stuff that is outright offensive to every Christian – that would completely wreck their ratings. They can afford to piss off fundies, but that’s because there’s so many moderates that will just roll their eyes and move on.
Exactly. If Christians are currently being treated like this at 70%, imagine what will happen when they’re 60% or 50%.
@jermo sapiens
I imagine they will be allowed to congregate in places of worship unmolested. They will not be fired for being Christian except when they insist their religion interferes with the course of their job. They will not be attacked for their faith, nor forced to hide it, again excepting situations when anyone would be expected to leave their faith at the door. Christians will be allowed to preach peaceably in public places, including state-funded places such as colleges.
@TakatoGuil
You are literally copy and pasting part of your prior answer. Please don’t do that. But since you did, I’ll respond to a few more directly.
Beto was saying, what, last week that we should revoke tax exemption for churches that don’t support same sex marriage? How exactly will they congregate in places of worship unmolested when they are being taxed out of existence?
It matters a lot who decides what “the course of their job” is. Like deciding that hospitals are required to perform abortions or that doctors are required to make referrals for procedures they consider harmful.
That must be why Senators have been quizzing judicial nominees on their membership in the Knights of Columbus. Because membership in a private charitable organization is just part of leaving their faith at the door.
ETA: removed some snark
@TakatoGuil:
I suspect this has shaped your attitude towards Christians, and I don’t blame you. Hopefully you’ll come to see that most Christians see the “dude with the sign” as a complete moron.
@Nick
If Jermo wants to ignore my initial response to him in favor of “If the oppression I’m baselessly claiming exists now is as bad as I baselessly say it is, imagine how much worse this slippery slope fallacy could be later!”, then I will repeat my point to him as many times as it takes.
Anyway, Beto wants churches to be held to the same standards as other 501(c) organizations, which are not allowed to use their exempt money for political purposes. Churches do that today. It is not oppression to say that they should follow the same laws that apply to everyone.
As for doctors, part of the job is getting patients treated. If you refuse to do what the evidence-based consensus says treats the problem, and you refuse to refer them to someone who will in America’s shitty medical system where referrals are as necessary as they are, you are impeding treatment and yes, need to be removed. Personally, I’d rather we have a saner medical system so that referrals weren’t necessary, which would seem to resolve the issue.
The Knights of Columbus thing was wrong (I wasn’t surprised to see Harris involved when I googled it — I really don’t like her), but I don’t think it stopped the judge from getting the position and even the Washington Post called it bullshit so that’s left-leaning media stepping up for Christians, not oppressing them. Also note that it’s specifically anti-Catholic bigotry, not anti-Christian bigotry. The former was a historical problem and it’s not surprising to see that there’s still some vestiges left of it today. Hopefully journalists on both sides of the aisle will continue to decry it when it occurs, and judges will continue to be confirmed despite any bigoted attempts against them.
“There are five lights.”
(regardless of context, this tactic makes you A Dick)
@TakatoGuil
Some churches may be abusing their status, but the question Beto was answering was whether it should be revoked for opposing same-sex marriage. That’s not political purposes. Here’s the exchange:
The doctor’s job is deciding what is and isn’t treatment. Doctors are not the instruments of the patients’ wills. That was actually a case in Canada, anyway (where jermo is from), where referrals are not so necessary.
I appreciate that, FWIW, but I don’t think things are quite the way you put it. For one thing, anti-Catholic bigotry waned a long time before this, and the recent stuff is not coming from Protestant objections to Romish practices, so I think the recent waxing is something new. For another, it wasn’t just Harris, it was also Senator Hirono and, with the Amy Coney Barrett and People of Praise case, Senator Feinstein.
I would love to see it wane again, but I don’t think that’s the trend. And regardless, counterexamples are helpful when you make a blanket statement.
I’m not ignoring it. Others have answered your earlier point quite well, and I didn’t feel the need to add anything.
And my point is that at 70%, Christians currently punch well below their weight in terms of cultural influence, and they are the designated outgroup for the elite. You are almost right when you say “media elites can’t actually afford to do stuff that is outright offensive to every Christian”, they get away with as much as they can considering the 70% figure. As that number goes down, they will be able to get away with more and everything indicates they will do more.
I invite you to visit a 70% Muslim country to see the difference.
@TakatoGuil
You are in fact stating the oppression that people are concerned with right now.
“just being forced to follow the law” is a problem if the law says that Christians can’t for example affirm basic doctrines of their faith without losing the protection of that law.
If it is a basic doctrine of my faith that homosexuality is sinful and that dressing as the opposite sex from your birth sex is sinful (both are true), then requiring me to violate that in order to have the protection of law IS oppression, just as much as it would be to deny a Muslim woman the protection of law unless she removes her hijab.
And saying “the law in its majestic equality makes both Christians and Muslims remove head coverings” is not changing the fact that such a law is oppressive.
My brother is a Dr in Toronto and he’s devoutly Christian (unlike me). If a patient asked to be euthanized he would have to give him a referral or be expelled from the College of Physicians. In which case, Canada would lose a specialist in internal medicine to the USA.
Whoah whoah whoah, hold up. I don’t like Harris either, but calling this out as anti-Catholic bias is going too far. First up, disclaimer: I was born and raised Catholic and went to Catholic school. I’ve had to deal with many Knights of Columbus over the course of my life.
If you go to the list of questions asked of the judge about the Knights of Columbus, they all refer directly back to specific policy positions advanced and advocated by the Knights of Columbus.
If someone coming before you is a member of a group that advocates for some extreme positions (beyond those of the Catholic church), then asking whether their past affiliation with that group will influence their future judgement isn’t beyond the pale.
@Aftagley
(emphasis mine)
I don’t know what you mean. In the first set of questions, the KoC donated against legalizing same-sex marriage; that is a position of the Catholic Church. A KoC magazine article said contraceptive pills can have bad side effects on reproduction, mate selection, etc.; that’s not the position of the Catholic Church or KoC because it’s an entirely empirical matter*. In the second set of questions, the KoC leader said abortion is the killing of the innocent, and the position of the Catholic Church is that abortion is the killing of the innocent, and then the same-sex marriage question comes up again.
So no, none of these positions is over and beyond the teaching of the Church, nor are they or should they be out of the ordinary for Catholics to believe. So insofar as membership in this organization is taken to be disqualifying, then membership in the Catholic Church must be taken to be disqualifying. That’s a religious test for office.
ETA: emphasis in quote
ETA: *Okay, on second thought, I’m playing a little fast and loose here. The Catholic Church, and the KoC, take plenty of empirical stances: prayer works, God exists, etc. But even supposing, rightly I’m sure, that the magazine is run with an editorial policy in mind, this can hardly be construed as “the position of the KoC.” The KoC could start campaigning against contraceptive pills for their bad side effects, but to my knowledge they haven’t. One reason for suspecting they haven’t is that if they had, we would surely have heard about that instead of this article.
@jermo
Until Nick responded specifically to my repetition, no one actually had addressed my general point – Nick had a specific sort of quibble, but I explained what I meant by it and he agreed in a general sense. And for the record, I was a Christian at the time I was in college, and I rolled my eyes at him as much as anyone else. Fact remains, Christians are allowed to come onto government-funded institutions and propagate hate speech against me, so the idea that they are being oppressed, thrown to lions, etc., remains laughable to me.
However, comparing America to the 70% muslim Kazakhstan, I see that there is little freedom of the press there. Is that what you think is necessary to protect Christians from oppression?
@Echo
“You can’t be this kind of tax-free entity and produce political ads,” is what I meant, because it was what I erroneously thought Beto had said (I misremembered, and am embarrassed for having done so). It is very different than a no face coverings law because it is not specifically designed to be anti-Christian. There is nothing in the Christian faith that requires that churches be able to participate in the political arena, and the freedom of religion that is understood in America generally expects that they be separate entities.
@Nick
I was aware of the Hawaiian as well (I said “Harris involved”), but wasn’t aware of the second incident. Still though, apparently Aftagley has a better defense for the situation than I do. (EDIT: Or you’ve already noticed, but either way I’m bowing out of this whole thing, because trying to debate with you, an enjoyable partner who I have enjoyed talking to this morning, is a lot less fun when I’m going to get called an Orwellian dick by random passersby, so I hope Aftagley is fun to talk to!)

As stated above, I misremembered the Beto incident (I genuinely thought it was a 501(c) thing; those come up a lot), and can’t say I’m impressed with that kind of attitude. I’m a gay man and I don’t expect any church to be forced to marry me to my future husband. Frankly, I wouldn’t even want them to; I’d much prefer a venue that likes its practitioners.

As for the doctors thing, there are medical boards and guidelines doctors have to follow regardless of their inclinations, for the exact reason that doctors aren’t always right. They don’t have to be instruments of their patients’ wills, but a person shouldn’t be rolling the dice to see whether they’ll be able to get a treatment that’s available to the public. It’s their health, not the doctor’s. TBH, it feels to me like if Caesar says, “Send them to a different doctor,” that’s a pretty simple “submit to Earthly authorities” deal that the Bible is pretty clear Christians are supposed to be doing.
The longer quote from Beto disqualifies him from being president, IMO. It sounds like something Trump would say, just from the other side.
@TakatoGuil
That’s not even quite the law. Churches are allowed, as are other 501c3 organizations, to make statements on ISSUES, but they are not allowed to endorse candidates. They may also endorse and fund ballot measures. https://www.irs.gov/newsroom/charities-churches-and-politics
They are also limited in what they can spend to lobby, but those limits are no different for churches than for any other 501c3. So Beto is indeed asking for a specifically anti-Christian law, at least as applied to certain Christian interpretations.
Given the corrected interpretation, do you agree this is a specifically anti-Christian law?
If you agree with the first, do you agree that Beto is in fact attempting to oppress Christians, whether successful or not?
Sorry to hear that, it’s been a good conversation. @Aftagley is a decent guy so I think it will continue to be a good one.
@Nick
I hope so, but I’m having trouble writing a defense of my previous statement. Usually when that happens, it means the position I’m trying to defend is weak, so I’m not sure how good a discussion partner I’ll be here. (Also I’m pretty slammed with work today).
First off – you’re correct. The KoC doesn’t stray from official catholic teachings. I’d argue they cherry-pick which aspects of catholic teachings they care about supporting politically, but their positions are defensibly catholic. In this aspect, they aren’t “extreme.” Re-reading my point, it looks like I’m alleging this, so I’ll retract that claim.
That being said, compared to polling of the catholic community in America, they are pretty extreme. KoC continues to denounce gay marriage, while a majority of US Catholics support it. Same with abortion – a majority of US Catholics are in favor while KoC continues to take very extreme measures against abortion (I don’t care what your beliefs are, intentionally deceptive Crisis Pregnancy Centers are ethically suspect and qualify as extreme). This keeps happening – average catholic opinion, at least in America, corresponds far closer to national averages on topics than it does to official church teachings.
So, that’s what I meant – the KoC is a conservative subgroup picked from a pluralistic larger community of Catholics. In the same way that disliking an exclusively left-handed group of avowed Marxists wouldn’t imply a larger distaste for the sinister folk among us, I don’t think that being leery of a Knight of Columbus implies any larger anti-Catholic bias.
On second thought, no, my argument above is bad. I can’t simultaneously claim that the Catholic church as a community is a pluralistic group that isn’t always bound by doctrine AND that the Knights of Columbus are all totalitarian and only follow church doctrine.
Hmm, I’m not sure where to go from here. I still don’t think that her asking those questions indicates an anti-catholic bias, nor do I think they were unfair to ask, but I’m having trouble figuring out why.
Beto seems to argue that churches should be taxed when they engage in anti-gay marriage politics, but not when engaging in pro-gay marriage politics. Then the defining characteristic that he wants to tax is not them engaging in politics, but the contents of their politics. Since that political stance is a part of certain religions, that is political as well as religious persecution.
This is such a well-known fact that Americanism was condemned as a heresy in 1899.
(However, what was Americanism in 1899 is basically official Catholic doctrine since Vatican II. So I don’t know where this leaves us.)
I think some American thinking on this point tends to be torn between a sort of radical individualism and reality, and we often don’t have very coherent ways of thinking about how various forms of group membership should intersect.
From a hyper individualistic point of view, there’s nothing particularly special about religious belief over any other form of belief. If you’re supposed to wear a certain hat for a job, either everyone has to wear it (doesn’t matter if you’re Sikh; no one gets extra privileges as virtue of some other group membership) or no one has to wear it (we can all wear whatever hat or other head covering we like).
But there’s a desire for religious membership, in some sense, to be able to allow otherwise disallowed behaviors or grant protection for certain opinions in a way that membership in other groups typically doesn’t (at least not explicitly), partly because trying to stamp out the religious beliefs of various groups in the past has so often turned out incredibly badly.
If I were starting from scratch in utopia, I’d think about scrapping any and all religious carve-outs, but only if I could widen the level of default liberty people get for most beliefs that aren’t in some way aggressive (as in, inciting physical violence or involving true threats or something).
But in the real world, religion gets some special carve-outs and respect by virtue of its age and number of believers, and I’d probably prefer that to stick around on balance, even though I think it’s not really defensible as an abstract principle. There aren’t many other types of organizations large enough and significant enough to people to counterbalance the power of government or to provide a source of comfort and support when government fails.
@quanta413
You could also reason the other way: the carve out should be for traditions in general, not just religious ones.
Not necessarily arguing for this, but it seems something to consider.
Aapje:
Punishing the expression of the wrong ideas via the tax code is pretty obviously going to violate the first amendment. Beto isn’t stupid, so he knows this and is saying stuff that sounds good to his base but that he knows can never be done.
@albatross11
If promises that violate constitutional and human rights are a valid (Democratic) election strategy, then a lot of the (Democratic) criticisms of Trump seem rather hypocritical.
@Aftagley
You’ve already given up the argument for different reasons, but I want to push back on this, anyway. In England for many centuries Catholics were forbidden from holding public office. Of course, it wasn’t a matter of blood—some cradle Catholics pursued public office by leaving the Church and becoming Anglican.
Your argument seems to be saying that it’s okay to be suspicious of the KoC for nothing other than being orthodox Catholic in its political pursuits*. In other words, if the KoC or Buescher himself simply disavowed Catholic teaching, everything would be fine. That’s still a religious test for office—it’s just one requiring heresy rather than apostasy.
ETA: *Granted, your fake crisis pregnancy centers example speaks against this somewhat.
Not quite. My opinion (read: bias) on the KoC is that they are a politically motivated subgroup that just happens to be made up entirely of Catholics. I think that the tapestry of Catholic beliefs doesn’t track naturally onto either political party, and that the KoC cherry-picks which Catholic causes to mobilize around by which ones are both Catholic and Red Tribe (i.e. firmly pro-life, but only paying lip service to ending the death penalty).
This belief is heavily influenced by my history with the KoC and knowledge of some of the membership (somewhere between 60-100 people). This article is a more eloquent explanation of my trepidation around the Knights and their current leadership.
Regarding whether “Christians are now oppressed in the U.S.A.”, my guess is that it depends: oppressed relative to what? Compared to Sudan?
Mostly not, and some of the new ‘oppression’ is really the feeling of a loss of dominance. See Pew Research’s 2017 study How Americans Feel About Different Religious Groups (including atheists), in which those polled were asked to rate how warm or cold they felt toward other Americans in different sects (and atheists), from 0 degrees (very cold) to 100 degrees (very warm); compared to a similar study in 2014, most groups received warmer ratings.
The ratings were:
Jews 67°
Catholics 66°
‘Mainline’ Protestants 65° (Episcopalians, Methodists, et cetera)
‘Evangelical’ Protestants 61° (Baptists, Pentecostals, et cetera)
Buddhists 60°
Hindus 58°
Mormons 54°
Atheists 50°
Muslims 48°
so most Americans are “warm” towards Christians, but are there places and subcultures in the U.S.A. that are “cold” towards Christians?
Yes.
From The New York Times: A Confession of Liberal Intolerance By Nicholas Kristof:
“…consider George Yancey, a sociologist who is black and evangelical.
“Outside of academia I faced more problems as a black,” he told me. “But inside academia I face more problems as a Christian, and it is not even close.”…
Where are these situations? Please give me a list of cases where my religion is not welcome so I know what to avoid.
I mean like, if you’re a therapist, you should be helping your patients without trying to convert them. If you’re a teacher, you should be teaching your students without trying to convert them. That kind of thing. I’m sure you’d agree that a depressed Christian shouldn’t be getting told by their atheist therapist that all of their problems are caused by “an irrational belief in sky fairy old man”?
@TakatoGuil
I don’t think this is a good example. There are lots of therapists that specifically advertise themselves as “Christian therapists” who bring their faith directly into the room to help their patients.
As long as they are honest, I don’t see a problem with a Muslim/Christian/Jewish/atheist therapist using their beliefs to help their patients, and it’s societally encouraged.
@TakatoGuil
I definitely agree with all that, but there are the rather hairier cases of whether Christians can be teachers without being made to teach gender theory in school, or whether Christians can be doctors or run hospitals without being made to do things they believe are harmful, and other such cases. My job as a Christian isn’t to convert people in the workplace, but I’d like to be able to help people, consistent with the corporal works of mercy. Faith without works is dead, etc.
@EchoChaos
There are also specifically faith-based teachers, but the generic example of either does illustrate the point, while the exceptions only further demonstrate that Christians are about as oppressed as people who like dogs.
@Nick
There’s definitely hairy cases, but that’s more a case of the world being a complicated place that makes it impossible to make everyone completely happy than it is of Christians being oppressed at this time.
@TakatoGuil
As a matter of preference, I would prefer all teachers to state outright their biases in that way rather than hide them.
If my therapist is an atheist who genuinely believes that irrational belief is causing my problems, I don’t want him to lie to me. I don’t see it being a better world when they lie.
And I will note that your point to Nick is exactly the complaint that most Christians have when they say they’re oppressed. Whenever two people can’t both be happy because of a contradiction, the left almost always, if not always, comes down on the side of the non-Christian.
So let me try to give a generic secularist response to this, bearing in mind that the OP may not agree with it and I don’t always agree with every implication of it myself.
The relevant principles are:
(a) If you are making decisions about the operation of a public accommodation, you should have to make them for public reasons, in the philosophical sense of public reason.
(b) Religious beliefs do not count as public reasons. Neither do racial prejudices and some other kinds of prejudices. Note this is not the same as saying religious beliefs are morally equivalent to those prejudices, only that they equally must be excluded from public reasons.
So a good rule of thumb, on this view, is that if you are performing an institutional function where you would justly be prohibited from discriminating against black people, you are also justly prohibited from making decisions about that function according to your religious beliefs. This of course leaves a lot of unanswered questions, notably what should count as a public accommodation, but it sheds good light on most of the specific situations raised in this thread.
The pragmatic and historical justifications are similar too: the key claims include
(1) that decision-makers in public accommodations, even if privately operated, exercise broad-scale coercive power that must be checked to preserve the effective autonomy of others; and
(2) that there is no stable equilibrium where private institutions are free to discriminate or not, or to impose their religion or not, because historically private racist institutions, and likewise private theocratic institutions, had no qualms about using both state and private violence to get their way, so the only way to prevent that kind of intertwined oppression from taking hold again is to banish it entirely from the public square.
(2) in particular deserves emphasis because I think most religious people who feel persecuted by secularist restrictions today underestimate the degree to which those restrictions are motivated by horror at the theocratic restrictions of the past. Christians are, on this view, not seen as powerless in any secure or long-term sense, but as recently defeated tyrants who have to be very carefully policed lest their tyranny spring up again. The recent rise of “integralism” will only lend more credence to this view.
@salvorhardin says: “…I think most religious people who feel persecuted by secularist restrictions today underestimate the degree to which those restrictions are motivated by horror at the theocratic restrictions of the past. Christians are, on this view, not seen as powerless in any secure or long-term sense, but as recently defeated tyrants who have to be very carefully policed lest their tyranny spring up again…”
That’s an interesting insight, as it’s puzzled me why conservative Christians in cities like (for example) San Francisco (where I work) aren’t more often treated with the blanket liberal tolerance that other faiths are (though there does seem to be more tolerance for heterodox views when the believer is also plausibly an ethnic minority). But when I think of it, most of those in opposition to fundamentalists, etc. are migrants from areas where religious conservatism was dominant, usually including their family members, while those who grew up inside “blue bubbles” seldom care as much, if at all.
From my vantage point, that the Baptist church doesn’t have a rainbow flag flying like the Methodist church up the street does just doesn’t seem like something to fight about – to each their own (though that Catholics do have an inside battle seems inevitable, given their size and the breadth of their worshippers, and their being the ‘universal’ church).
Like much of ‘the culture war’ it seems to me that just declaring a truce and accepting that a monoculture for a nation this large and populous isn’t feasible anyway is the way to go (and that’s the historic solution: there were some State churches in the early Republic, but a Federal church was forbidden by the Bill of Rights).
@salvorhardin
Yes, I agree. Liberals are treating Christians, the majority of the country, as a defeated foe that must be policed.
Christians notice this.
@EchoChaos
Not a surprise that traditionalist Christians notice it: but how many reflect upon the long, cruel centuries of theocracy that might cause secularists to so passionately regard that policing as necessary?
And the “traditionalist” qualifier is key here, as modernist Christians of the “More Light”/”Open and Affirming”/etc type are not the foe and not treated as such. This matters because while certainly Christians broadly speaking remain the majority, as you say, I would dispute that traditionalists are. The polls on social issues would look very different if that were true.
@salvorhardin
It doesn’t seem very coherent to argue that the religious must be severely restricted in their freedom because the religious used to severely restrict other people’s freedom. Also, when the religious restrictions seemed to be heavily based on social policing by a large religious majority, it is not obvious at all that it is necessary to police a religious minority in a secularizing society.
Also, it is extremely and literally uncharitable to equate religion with oppression and ignore the good things they did and do.
It’s perfectly coherent. If someone is going to get oppressed, then it’s preferable that it’s Them and not You. That reasoning doesn’t give you the moral high ground, but having the moral high ground isn’t much comfort when you’re being oppressed.
But Christians are not a minority in America. That’s a running complaint in this thread, that Christians should be treated better because they’re a majority. And if society is secularising, it’s doing it so slowly and listlessly that it’s very easy to imagine the direction turning at any moment.
@salvorhardin / Baeraad
This is a defensible position. It’s also NOT the one liberals generally claim, which is that Christians aren’t oppressed and that their complaints that they are should be taken as nonsense.
I agree with the statement that we’re talking about traditional Christians, but saying “just change your beliefs and we won’t oppress you” hardly makes it better. Nor does the fact that such Christians are a minority.
I do think you have described the actual motivation well, although I think your view on how bad Christian oppression was is pretty historically inaccurate.
This thread changed from “Christians aren’t oppressed” to “yeah, but they deserve it!” in just a dozen comments.
@salvorhardin
The thing is that, for a variety of reasons, that fear is way off base today. I feel like I need to do an explainer on integralism. I’m not really the person for it, but I’m not sure there’s a better person for it on SSC, either.
It’s the synchronic, rather than diachronic, Law of Merited Impossibility.
@Nick @TakatoGuil
Re: doctors, I think the most equitable norm would be this: A Christian doctor who believes that abortion is murder should, if advising a patient with an unwanted pregnancy, clearly state that they believe abortion is wrong, and offer to refer the patient to a different doctor if the patient’s beliefs conflict with that. Ditto re: HRT for gender-dysphoric patients or whatever else you’re alluding to.
I feel like this avoids forcing doctors to advocate procedures they believe to be harmful, while not closing off options for the patient if the patient believes those procedures to be helpful.
@thevoiceofthevoid, I see where you’re coming from on the basis of social policy, but – as a pro-life Christian myself, though not a doctor – I would not be comfortable with that. You’re essentially telling people like me that, while we don’t need to personally perform (what we consider to be) murders, we need to make personal referrals to hitmen.
It’d be better to let health insurers, or even the government, maintain such a registry themselves.
@Evan Þ
You make a fair point. I’ll weaken my recommendation to the pro-life doctor saying, “I believe abortion is fundamentally wrong, if this gravely conflicts with your beliefs then you should find another doctor,” rather than actively making a referral.
This is a very real issue for my brother who is a doctor in Toronto, and who could lose his license if he refused to refer someone who was seeking to be euthanized.
On doctors specifically, one complicating factor is that the supply of medical practitioners is artificially restricted. It’s less defensible to refuse services on the ground that “they can go somewhere else” if you benefit from the use of state power to narrow the range of other places they can go.
On intolerance for traditionalist religion generally, I should clarify that I personally believe this has gone too far in a way that undermines liberal principles of tolerance; as obnoxious as Jack Phillips, Hobby Lobby, etc are they should be free to run their businesses according to their values, just as secularists should be free to boycott them according to ours. The point I am making is that if you have historically not practiced tolerance toward others, you probably won’t get a positive reception when you demand tolerance for yourself.
The Alliance Defending Freedom provides a good example here. A lot of the cases they take really are defending private religious people and organizations who just want to live out their beliefs in peace in their private lives. But when major figures in the organization work to defend the criminalization of homosexuality– mostly in other countries these days since they realize they’ve well and truly lost that fight here– their claim that they’re just trying to defend religious freedom rings hollow. And given the level of state and private violence that gay people were historically subjected to in the US before the defeat of traditionalism, and the level of violence they are still subjected to in those other countries, it’s understandable that an organization that elevates supporters of that violence would be condemned as a hate group, even though that overbroadly condemns those of its members who genuinely care about religious freedom and don’t want the state to impose their own religious beliefs on others.
Christians aren’t oppressed in the US in any extreme sense, but there are fairly common environments where they’re sniped at a lot.
I think putting up with insults is work.
Yes, but show me a person who doesn’t get sniped at for something? I’m not keen on situations where some people or groups of people get sniped at a lot more than others, without being personally at fault – but I don’t see Christians as being in that situation in the US.
Put another way, I own a political tee-shirt with a slogan of “no special rights for Christians”. The set of places where I could wear it without problems is much more limited than e.g. the set of places where Christians can and do prominently and visibly self-identify (via jewelry, bumper stickers, etc.) And I live in California, in particular the SF Bay area.
And note that the sentiment I’m (not) expressing is not “no rights for Christians” but in effect “no rights for Christians that aren’t also available to atheists, Muslims, Jews, Bahai, Hindus, and members of random New Age sects”.
Note also that the problems I’d experience would (just) be lots of offended Christians, and a few people who dislike politics being brought into the workplace etc. That’s enough to stop me from wearing it to anything much except a political rally protesting anti-non-Christian political speech or action, because I don’t especially want to snipe at random people in my environment.
Also, finally, I bought the shirt while living in Colorado, during the decades when that was home base for some loud and well-known Christian conservative movements. Having those people around for too many years makes it very easy for me to respond to “Christianity” emotionally as being all about using the power of the state to mind your neighbour’s business, in between committing nasty public acts of cruelty (e.g. picketing someone’s funeral to announce that the death was God’s punishment for sin). I know not all Christians are like this – intellectually at least – but my gut insists those people are the majority, or the ones with real power.
Am I sniping by bringing this up? Not any more than other posters on this blog (not this thread) who’ve opined that no one can be moral without religion. I.e. I’m not choosing the option of maximum kindness, but neither are they. And neither of us is sniping per se.
I suspect it’ll be a cold day in Hell before I get over my visceral reaction to in-your-face Christians, and anyone who mixes Christianity with politics. But I am capable of not letting it affect my behaviour to random small-C christians, at least 99.9% of the time.
OTOH, if you turn up at my door to proselytize, better hope I’m in a good mood, and manage a somewhat frosty “I’m not interested in discussing religion, thank you” while firmly closing my door.
Suppose I owned a political tee-shirt with a slogan of “no special rights for blacks”, or “no special rights for LGBTQ”. What do you think is the set of places I could wear it without problems?
And what do you consider to be “the set of places”? Is it strictly geographical – the total number of acres where I could wear such a shirt without problems – or might it also be in the virtual realm of online communication? (Suppose it’s not a shirt, but rather my email signature, or the tagline under my username or portrait in any typical online forum.) Or the virtual realm of online news? Popular film? Academic discourse?
What was that Yglesias quote again? “Right is dominant in policy and cares about culture, left is dominant in culture and cares about policy”?
I get the feeling your comment doesn’t account for how well you’ve got culture locked up.
I think this doesn’t help your point. If you dare wear your tee, you end up offending lots of people you rarely interact with anyway. If someone dares go around with the other tagline, they risk the fate of Justine Sacco.
(Curiously, for a lot of righties, this is just peachy. They hang out in rural communities and small online enclaves. They’ll keep their RL friends just like their lefty counterparts do. But some of them will still look at Hollywood, mainstream media, and the government just as longingly as those lefties look at Wall Street, big energy, and… the government.)
On the gripping hand, yeah. A lot of this is just that SMBC cartoon redux.
Maybe we should just say “SMBC #2939!” as shorthand for “look, we’re just bitter over the other side’s snipers”, so we can move the discussion along.
@Paul Brinkley
Except the right isn’t dominant in policy. Is government spending going down? Is the regulatory burden being reduced? Is power being devolved to the states? Other than gun control, policy is either not moving or moving left.
The right controls the Senate, controls the White House (after a fashion), and got five justices on SCOTUS, and might score a sixth if something happens to RBG. And if you define right = conservative = slow change as opposed to swift, then every time policy stops or moves only slowly left is a victory for the right. It ain’t all roses on the left.
I think this is most easily explained by the Bryan Caplan theory of the left/right. “The left hates markets; the right hates the left.”
On average, the right has a relatively weak belief in things like restraining government or reducing regulatory burden. After all, when they’re in power, reducing those things would reduce their own power. And even “right-wing” voters are more motivated to punish the villain corporation or business owner of the week than they are to unleash more market competition. Or they’d really like to stick it to the left.
Individual business owners themselves are not necessarily concerned with being pro-free-market either, since some regulations help them.
@Paul Brinkley You certainly have a point there, but it’s one from a different sub-thread. This one’s about gratuitously picking on people, not about political favoritism and some animals being more equal than others. Currently, and in the US specifically, “no special rights for <specifically protected group>” would be interpreted as a desire to specifically persecute that group, even if your statement were explicitly a complaint about Affirmative Action, and would be responded to accordingly.
I’m not going to argue about the desirability or implications of that reaction this deep in a thread, where replies can’t be threaded. If you do want to discuss that, on its own or in contrast to the treatment of Christians qua Christians, start a new thread in any CW post, and I’ll probably rise to the bait and explore the question.
What Nancy appears to be claiming, rephrased in blue tribe argot, would be microaggressions etc. directed at vocal or obvious Christians.
Your (implicit) claim is either about political correctness (taboo speech) or about favoritism directed against you. (I’ll know which only if you start that thread.) Different topic, and more serious, at least on the latter interpretation.
DinoNerd:
“What Nancy appears to be claiming, rephrased in blue tribe argot, would be microaggressions etc. directed at vocal or obvious Christians.”
Sort of. What I was thinking about was generic snark aimed at Christians in general, and Christians who want to keep the peace and/or don’t want to be involved in sticky situations need to not say anything about it.
@Nancy Lebovitz
Ye olde “let’s all agree that those people are outgroup”, addressed to a group containing some of “those people”, with or without the speaker knowing this, or knowing whom.
That used to be the normal experience of gays and others of deviant sexuality, and it sucked then and sucks for Christians now.
Where I hang out, it’s less common than the othering of gays earlier in my life/career, but I certainly believe you.
The frequency would have to get pretty high for me to get overly concerned, with Christianity not being a valid (legal) cause for someone to lose their job, and essentially never attracting violence (“gay bashing”) – making it much much easier/safer for a Christian to be “out” than it was for a gay person. (Of course this is in the US or Canada.)
I currently experience frequent “microaggressions” (if you want to call them that), in the form of people basically insisting that extroversion is required for happiness, particularly in old age. It’s annoying, and can be depressing – what if they are right, after all – but I’m raising it here only as an example of the random stuff that everyone gets hit by. An awful lot of people see the world from their own viewpoint, and presume/insist in spite of evidence that everyone else should too.
But I may be being systematically unfair to Christians. I’ve witnessed or been the target of too much bad stuff explicitly motivated by Christianity – even if other Christians would, and sometimes did, call the perpetrators heretical. So I probably can’t evaluate evidence dispassionately. (And this even though 2 of my grandparents were perfectly good/nice Christians, one of them being fairly devout.)
An ideology lives and dies based on how powerful it is perceived to be and how seriously everyone seems to take it. Some ideologies, anyway.
If you blaspheme, you show that you don’t take religion sufficiently seriously; you are the enemy. Same with some secular ideologies. If the Church tells you to be offended when someone says “damn it” or listens to heavy metal, you’d better be. When someone authoritative enough tells you to be offended at blackface, the OK symbol, or frogs, you’d better be as well.
It seems pretty arbitrary because those things alone are powerless, but they can become a symbol of defiance. And defiance leads to satanism. Or racism. Or something.
I think this is basically what the “Culture War” is.
Which culture gets to enforce its taboos on society, through both law and social pressure, is the essence of it.
I agree with the points in the last paragraph. It’s a complex judgment based on the intent of both parties and culture.
If I think you are offended as a weapon to try and win some policy battle, I’m inclined to judge against you. Likewise if you seem to be cultivating particular sensitivity.
On the other hand, if there isn’t really a larger point behind the action, it deserves more presumption of innocence. And then, how reasonable is the action in the dominant culture? Wearing a dress of another culture, say, is pretty different from burning a book. It’s usually an honor to have something named after you, so I don’t think having a team named “the Braves” should be offensive – but “Redskins” probably is a more legitimate grievance, because we don’t usually refer to people by physical characteristics.
On the other hand, I’ve yet to see anyone take offence at the use of the term “white”.
Not saying it never happens, it just doesn’t register on my radar.
(Personally, I think it’s a useful term in a wide variety of situations.)
Eh, you know, I think you’re right. You can get away with saying “black” too.
And it’s not lumping distinct ethnicities together that is the problem, because white–and black–do that too.
I guess it’s more context, though, since I’m sure a baseball team named “the blacks” would go over poorly.
Maybe it has nothing to do with the name, but the exaggerated iconography? But we’ve still got “fightin’ Irish” don’t we?
Yeah, the rules are hard to articulate.
Best to err on the side of not taking or giving offense.
If that were a general rule, it would be reasonable.
But I doubt that people will stop referring to the War of Northern Aggression as a just war in my presence. 😉
In all seriousness, there are definitely social standards about what is a “reasonable” offense to take and what isn’t. While we should definitely be charitable to people, especially in meatspace, having the ability to control those boundaries is powerful.
The New Zealand rugby team is called the “All Blacks”.
The soccer team is called the “All Whites”.
Foreigners (Americans in particular) do sometimes react poorly to this.
The Wikipedia page is unsure of where the moniker “Fighting Irish” came from, but it sounds as though it may have been started by Irish people affiliated with Notre Dame referring to themselves. The rules are hard to articulate, but I think one thing that’s generally agreed on is that people have a lot more leeway in what they say about themselves and their in-group. “The Redskins” would probably get a lot less flak if it were actually a team of Native Americans referring to themselves.
“On the other hand, I’ve yet to see anyone take offence at the use of the term “white”.”
Because I can’t resist. Possibly an edge case.
https://www.youtube.com/watch?v=oi0jN-y6vfY
Oooh, awkward…
That said, I didn’t necessarily take away that David Webb was offended at being called white, but rather at the implication that someone with his life history obviously had to be white (and even then “offended” may be too strong a word).
Looking at the bigger picture, he has a bigger point than might otherwise be apparent. I mean: Obama was President for two terms. Assuming that black people can’t achieve pretty much anything they put their minds to after that seems… I don’t know… a bit racist, actually.
This is an interesting article on the modern notion of offence.
The argument is that offence (as used here) is about a slight on somebody’s (or some group’s) honour. That makes being offended on a group’s behalf make slightly more sense – you’re trying to prevent a certain group from being disrespected rather than sharing some possible emotional response.
Treating certain groups differently also makes more sense under this interpretation i.e. people [leftists] are trying to prevent groups they think are disadvantaged from being disrespected. Christians aren’t disadvantaged now or in recent history (according to common/standard thinking on the left; I’m not interested in arguing whether this is actually the case) and so this disrespect is less important.
This is a hard one. It can be a very good thing to amplify the voices of people who are not being heard, or (sometimes) use your own relative privilege to point out harm being done to people who don’t feel safe enough to speak out. But it can be a pretty terrible thing when self-appointed allies decide what other people need/want, and insist on giving it to them.
And emotional harms are much harder to judge than more tangible issues. One person’s terrible interaction, leaving them depressed and suicidal, is barely noticed by another person of similar objective circumstances. In general, it’s not a great idea to do things especially likely to cause emotional harms, but at some point demanding that others walk on eggshells starts doing more harm than the behaviours you are trying to prevent.
The internet doesn’t help, bringing together people who e.g. use a particular Anglo-Saxon (sic) word every second sentence with people who regard the use of that word even once as putting the speaker permanently beyond the pale.
AFAICT, almost all Twitter outrage storms these days cause a lot more harm than any benefits they may provide. OTOH, I’m happy with e.g. a manager interrupting people who talk over other people, saying things like “I’d like to hear what < person you interrupted > had to say”, whether or not this is the more common pattern of a male talking over a female. I’m quite OK with people not laughing at – and even drawing negative attention to – jokes based on treating some specific group as stupid (etc.), particularly when there are likely to be people of the target group present.
Bottom line – we could all use a lot more tolerance, and a lot less offence, and the more power someone has, the more true that is.
OTOH, if you want to build yourself an echo chamber, taking offence at anything and everything is one way to get a group consisting only of clones. And with the internet being what it is, you have a much better chance of finding people who agree with you on all your pet peeves – or who are willing to keep silent where they disagree – than you would in person, where you might find yourself preaching to a non-existent choir, unless you can bribe people to be your supporters.
The straightforward steelman, I think, goes like this:
1. Everyone agrees that respect for people is important.
2. Respect is not, however, universally due.
3. Ergo, rules for when respect is and is not due are important.
4. Since those rules are important, they require enforcement (assuming they are fair, etc.)
5. That enforcement must be universal, given the above.
6. Ergo, if you see someone hurt by a rule violation, you must try to defend them. QED.
Because of #6, people will say offense to one of us is offense to all of us. More accurately, I think one is not offended on behalf of others so much as they’re offended by someone breaking the rules for respect.
However, nowadays, the weak point is #4 – many people dispute whether the rules are fair. This is important, since if the rules aren’t fair, there will exist people with insufficient incentive to follow them. Anyone focused on #6 is going to be disappointed if they ignore #4.
And so we have this rule where you can punch up, which people at the top have no incentive to follow.
I can carry this further. There are meta-rules for figuring out fair rules, and one of them is “don’t abuse the rules”. Rules have a spirit, and if you break that spirit while following the letter, you’re in arguably even bigger trouble than if you’d just broken the letter, because you’re acting willfully.
And so we have people pattern-matching ordinary acts to acts previously ruled offensive, uncovering vast amusement parks of offense. Some of that is willful, some lazy. But they remember to be “offended on behalf” and keep going.
And so, incidentally, we also have people who will step right up to the brink of offending, say, Christians, without being outright offensive. And we have people who will think “ohh, they’re devout” and just stop inviting them to dinner, or offering to watch their kids… muttering that much more when they complain about something… passing over them for promotion because they’re “not a good fit”…
Which is not to say everyone ought to be required to invite devout Christians over for dinner. Rather, it’s that everyone ought to not mutter evasively about disrespect, or offense – or their fashionable synonym, “microaggression” – when it happens to anyone else. (But it’s probably worth noting there’s a rift there, and asking how badly everyone really wants that rift to close.)
Meanwhile, the respect system is still there, as a reminder to give fellow humans the kindness that encourages reciprocation and good humor. To violate its norms is to invite its collapse.
Politeness norms are great, but they should be universal, right? If you know something offends me, even if you think it’s a silly thing to be offended by, it’s polite to avoid it in my presence. What we’re talking about is a set of norms in which offending some people is very bad, and offending others is a positive good.
When someone wears blackface, people will respond with offense even if they’re white because they’ve learned that it is offensive. When a gay couple kisses in public, the same people will be offended at the old man who is visibly upset by this (by the standards he grew up with) highly offensive display. There’s something going on here, but it’s not politeness.
Agreed. But in turn, we’re assuming that your offense is in good faith. We both recognize that it’s possible to feign offense and control a great deal of behavior that way, and so we recognize the need (in a Schelling sense) to avoid even the appearance of doing so. Offense has to be based on trust, which is fragile, and therefore requires care.
This may mean sacrificing some offenses. If I think your tie looks tacky or your style of naming variables in your code offends my sense of elegance, it’s better for me to just suck it up than to try to make you change, risk mistrust, and then deal with you saying you’re offended by my combover, say. Or, it may mean that I have to make sure it comes across as an “effortoffense”, and be prepared to explain the nature of offense in convincing detail. Bonus points if it sounds like natural conversation; the presumption is that we’re trying to be friendly, not curt. (Sorry to any Curts out there.)
So:
This is learned, and can only be argued by authority, so I think they need to suck it up. (If, OTOH, they’d learned the history of minstrel shows well enough to grok the offense, then they can put effort in and it works.)
Some people are bothered by PDAs; if they are, they can probably be convincing about it, but they can probably just turn the other way, too, unless it’s noisy. So, case by case. Offense at the old man’s offense might be offense at the old man being careless with trust, again, depending on the case.
To some extent, negotiating offense is just going to feel weird, because it’ll ground out as a rational discussion about irrational gut feelings. OTOH, if we SSCerati succeed in our goal to convert the world into ratbots, we will also solve the problem of offense, and can turn to more productive tasks like paperclip making.
Given that it is not in fact reciprocal, let it collapse.
And live in a society resembling 17th century French court? How well do you expect that to go?
I know respect systems have worked. I think I’d rather promote them and maintain them.
There is no such thing as something that is offensive in itself, whether word or deed. “Offended” describes how someone behaves who chooses to act as if they were offended. A person will act as if they were offended whenever they think it is in their best interest to do so. It is their choice. We should generally ignore them. It is not any of our business. If we choose to act offended on their behalf, it is to promote ourselves. These are all good reasons to have no friends. It is all way too much trouble. I do not get offended because I never choose to. I cannot imagine acting offended on behalf of someone else. Surely I can find something to do besides talk. What a waste of time!
I think your examples can be (mostly) divided into three primary categories:
1. A direct attack on the offendee’s group. The offender intentionally insults a group and/or desecrates their symbols. This includes a, b, d, and i; and e or f if the offender intended to mock or denigrate.
2. An unintentionally poorly-taken reference to the offendee’s group. The offender says, does, or names something in reference to a particular group that they think is fine, but at least some members of the group find it offensive. This includes e, f, and g if the terms or aspects of culture were intended to be used neutrally or respectfully.
3. The “offenders” are simply trying to live their life and make no particular reference to the “offendees”. This includes examples c and h: saying “I believe LGBT folks have the right to marry and to be addressed with the pronouns of their choice!” doesn’t mean “I hate conservative Christians!”, though it does necessarily imply “I fundamentally disagree with conservative Christians on an important issue.”
Obviously, I think that the examples in category 1 warrant the most direct and/or secondhand offense, while category 3 warrants virtually none at all.
As a bystander, I think the proper response to category 2 is usually “woah dude, I know you didn’t mean it like that, but some people find that term/costume/name really offensive.” Hopefully the slight was truly just a misunderstanding and can be resolved with minimal drama. However, “screw that, I think I’m being perfectly respectful and it’s only a tiny nonrepresentative group who are acting offended” is sometimes a valid response.
Category 1 is in some ways thornier, since the intent clearly is to offend. Here, if you get involved at all you’re going to have to explicitly choose a side, e.g.: Do you care more about the reverent treatment of religious symbols, or about comedians’ ability to criticize and mock religion? Once you’ve chosen, chastise and support accordingly.
Any perspectives on ADHD in younger children?
My five year old (kindergarten) is showing a lot of the classic signs and several of the education folks at school have expressed concern. I’m being told to wait until the end of kindergarten for evaluation.
I don’t really see the point to this. If therapy or medication can benefit, why wait? Do they expect him to grow out of it? What’s the concern?
My biggest concern is that, due to some of his struggles, either he or the school will form the notion that he is “bad student” and that this will hamper his further development.
Unless your school is very small I would not worry about this at the kindergarten level. I probably wouldn’t worry too much about it even if it is a small school. People love a redemption story, so if it turns out the kid does have ADHD and in a year or two he gets proper treatment/medication and he turns into an excellent, attentive student that might even be better than if he had been a good student all along.
As far as “why wait”, 5 years old seems awfully early to start treatment. Doesn’t basically every 5 year old show some signs of ADHD?
**Disclaimer about my opinions:** Obviously I am not a doctor. I did receive an ADD diagnosis sometime around the 7th grade. Took Concerta for about a year, hated it, and never took any medication again. Two degrees (admittedly both undergrad, but whatever) later, I think I’m doing alright. I’m not opposed to medication for ADHD in principle, but I believe it is massively overdiagnosed, and that even for legitimate diagnoses the medication is overprescribed.
I started on ADD meds during the summer before 1st grade. I still remember the first day I took them. I had been struggling to read before, but about 15 minutes after the first pill, I sat down and read through a stack of books. I’ve been on some form of medication (first ritalin/concerta, now modafinil) ever since. No notable long-term effects from starting that early.
Not sure what to say over starting now vs starting later. I’d suspect that you won’t get pigeonholed too much, although it depends on exactly how it manifests. Mine was very much of the “staring into space” variety back then, and I was smart enough to get my work done, so I didn’t have trouble in kindergarten. It later became more outwardly weird, but I suspect that was rebound/withdrawal from the ritalin based on how I am on modafinil.
My brother has ADHD. A specialized tutor helped him a lot with reading and math skills, but that started in (I believe) second or third grade; prior to that there isn’t really a lot of actual learning that’s done in school, as opposed to socialization and daycare. Similarly, many ADHD medications do weird things to brain or metabolic function; best to put that off as long as possible in a growing child. I forget which medication he ended up taking, but it seriously suppressed his appetite – he ended up very underweight and had to follow a diet plan.
Also, mental health care generally, and particularly childhood ADHD, is really, really, really, unimaginably bad at both the specificity and sensitivity parts of accurate diagnosis; Scott has a post on this somewhere, I believe.
For one, it’s because of the way kids develop. Every kid under 5 would meet diagnostic criteria for ADHD – according to the instruments used to diagnose ADHD, their attention span is “low” and they are “impulsive and hyperactive.” Between 5 and 6, you can diagnose ADHD but it should be done carefully. So, your kiddo is right at the cusp of being able to be evaluated.
For two, it’s because stimulants will improve any person’s degree of focus, whether they have ADHD or not. So, even if your kiddo has improved focus with stimulants, it doesn’t mean they have ADHD.
You know your child better than anyone. But as a bystander, I would want to reassure you that even if your kiddo has ADHD and goes untreated – and even struggles – for a while, they’ll still be okay. Medicines help. And when that happens, they will see that they’re not “bad,” they just benefit from certain treatments.
So it’s worth taking this step by step and not rushing. Talk to your pediatrician, maybe initiate an assessment, see what it tells you, and go from there. Another option is psychoeducational testing – the school is required to provide this upon request, at no charge.
Thanks for your thoughtful response. To be clear, I am definitely not rushing into anything; this is a long-running conversation of several years. I am trying to avoid being irrational in either direction: “drugs are bad! it’s just a phase” or “he’ll be hopelessly left behind if he’s not doing calculus by the end of kindergarten.”
That being said, it is not at all clear to me how your response speaks in favor of delaying treatment/diagnosis.
If medication will help my child focus and be happier and more comfortable in school, but he doesn’t “have ADHD,” why is that a problem?
If he is going to “grow out of it,” he’ll do so whether he takes medication now or not, but he’ll have been happier throughout.
Is there a specific answer to this question, other than vague assertions that we shouldn’t medicate if we don’t have to?
First and foremost: see your pediatrician. It seems like you have been really worried. In order for someone to help you, they would first have to ask you a lot of questions. However innocuous those questions could be, it’s not appropriate for random strangers to ask you those questions… or to know their answers!
That said, he may be more focused on stimulants, but it’s not clear that he will certainly be happier. Kids who take stimulants may have appetite suppression. They may have headaches. And they may exhibit less personality – the drugs work by turning up the “executive functions” dial, and sometimes people are so hyperexecutive that they get kind of robotic. People don’t like that and they sometimes prefer the ADHD.
These adverse effects may be acceptable if the drug helps the kid recover from a deficit. I.e., there is what you might call a “therapeutic balance”: the good effects outweigh the bad effects in a way that A. is meaningfully good to the patient, and B. allows their physician to make correct predictions. So if they have a diagnosis of ADHD, we can predict they will have trouble, and we can treat to reduce the probability of that trouble, which is what the patient wants.
If there is no such deficit… then there isn’t a reasonable, non-Faustian way to determine that therapeutic balance. I.e., you can balance out “trouble from disease” and “trouble from medicine,” but you can’t balance out “doing fine” and “trouble from medicine.”
The latter case (fine vs adverse-effect-but-better-than-fine) is no longer a medical problem. It’s an engineering problem. Nootropics are a different kind of gamble.
So he needs a diagnosis.
This was pretty much exactly why I discontinued when I was younger.
Thank you – this was a very cogent response and makes a lot of sense. I am asking random strangers for perspective, not for an answer. Of course I will consult with our pediatrician and other relevant health professionals.
What touched off my initial question is that I was specifically advised not to evaluate until the end of the school year (coinciding with his 6th birthday).
My takeaway from this is: 1) I should not avoid evaluation, because evaluation and treatment may help. I now understand where the advice to avoid evaluation is coming from, so that helps me understand why it’s not for me.
2) I should be very aware that a diagnosis (and therefore prescribed treatment) may not be right at this age (or any age).
3) If medication is prescribed, I should be extremely watchful with respect to therapeutic balance.
Not a doctor, but as a parent I’ve read things about it on the internet. It seems like, in the US at least, they tend to wait until 7 to diagnose it.
One example that may lead a normal child to get diagnosed: boys mature more slowly than girls, and younger kids struggle more than older ones with sitting quietly and focusing. Thus, if yours is a December boy evaluated against a standard set almost entirely by January girls in his school cohort, he will seem behind. (Obviously this is an extreme case, but you get the idea.)
I don’t think it is a problem to get evaluated, as LONG as you don’t let the school pressure you into starting medication or placing him into “Special Ed” – an abyss from which it seems hard to climb out once there.
Also, luckily, it seems like getting evaluated and diagnosed does not equal medication: there’s all sorts of literature and providers out there who look at medication as last resort, and will recommend more physical exercise, reducing screen time, instituting routines, reducing sugar, delaying school by a year (i.e., start in 1st grade instead of Kindergarten) before prescribing the meds.
There doesn’t seem to be any evidence that ADHD meds do more harm to kids than to adults. There is much greater risk aversion as far as giving meds to children, which accounts for the reluctance, but this doesn’t seem too rational. On the other hand, the benefits are also essentially non-existent. Unless you are planning on tiger-parenting your kids, any head start they get now isn’t going to last to adulthood, just as the official program Head Start does not last to adulthood. And you’ll have to deal with paying for treatment and medication and dealing with all the bureaucracy, so ultimately I’d say no, don’t do it. See this if you have great trust in the “education folks at school:”
https://www.health.harvard.edu/blog/younger-kindergarteners-more-likely-to-be-diagnosed-with-adhd-2019011215756
It is not clear that ADHD is an innate disorder rather than being created by the school system itself. There was a Harvard study recently that claimed a 30% increase in ADHD diagnosis by birth month across the enrollment date (ie the kids born in the last month possible while being in that class had 30% higher diagnosis rates than the kids born in the first month), which would be pretty damning on its own.
Additionally ADHD diagnosis has been increasing for the past 20 years, and at a fairly linear rate (which is not what you expect if the increases are from refining diagnosis and catching more marginal cases). This is correlated with an increase in schoolwork for young kids and pushing learning back earlier and earlier.
My (layman, but reasonably informed and highly interested as a home-schooler) opinion is that schools currently are actively preventing natural development paths, and these are causing significant issues for many kids. Focus and attention should be viewed as skills to be learned, not papered over with medication (except as a last resort).
This, if accurate, represents the best argument I’ve seen to avoid evaluation/diagnosis at a young age. If I understand correctly, you are suggesting “ability to pay attention” is more of a trainable skill than is accepted by the current paradigm. This suggests that medicating for ADHD may actively prevent the development of focus skills because reliance on the medication renders such skill development unnecessary.
Is there any research to support this proposition?
I would say that another possibility is that even if it isn’t so much “learnable” (it probably is to at least some extent for the typical person) it is probably a part of childhood/brain development. Given that there are plenty of other areas where we don’t expect 5 year olds to be fully developed, I’m not sure why “attention span” or various other ADHD metrics would be one of them. Some of those kids are maybe just a year or two behind. The problem with starting medication at that age is…you never figure out if they were just going to grow out of it.
I guess every couple years you could stop the medication for a while and see if they can still hack it (having grown out of it) but at that point it would be a little surprising if they could since they’ve never had to try to operate without the medication boost.
That said, I don’t think you need to avoid evaluation at that age, just to be wary of immediately jumping to medication or of assuming that any resulting diagnosis is a permanent state for a 5 year old developing child.
I would say that focus and attention have aspects like weight, where there is both a strong natural tendency plus the ability to interfere for a different result (within limits).
Thinking about and rereading what you wrote I wanted to clarify something by way of example:
There is research from the 70s (I don’t know if it has held up or not, just an example) that claimed early reading and having to focus on small words on a page caused vision problems in kids which required glasses to correct them. There are a couple of possibilities assuming the first part is true.
1. The damage was more or less permanent and glasses were pretty much the way to go.
2. The damage would be reversed with time without glasses.
3. The damage would be reversed with time without glasses, but only if the kids stopped reading for a long enough period.
I don’t have an opinion yet on ADHD and whether it requires medication etc. once it is inflicted, but I want to be clear: ADHD as a diagnosis came with other behavioral issues beyond a lack of focus for many kids. I think that the school system is actively causing damage to kids who are pushed too far beyond their developmental level, not simply that ADHD is a descriptive term for how the kids would behave in a better environment.
If a child is showing signs of ADHD it might be any of
1. It’s just an age thing and they will grow out of it with time.
2. It’s the early stages of ADHD caused by something other than the school system.
3. It’s the early stages of ADHD caused by the school system.
In the case of #3 just waiting and observing would likely not have the desired effect (or might but at an unacceptable rate, lots of variables), but it could be that at some point ADHD medication is the best option, like glasses would be in the above example.
I agree with everything but this conclusion.
We agree that schools are putting kids into a situation that they’re not adapted for. We agree that people can learn to function in unpleasant environments.
But this doesn’t imply that there’s any virtue in learning to tolerate unpleasantness for unpleasantness’ sake. Often, it’s simpler to remove the unpleasantness.
For instance, my water-heater died. I’m sure I could learn to endure cold-showers. But I’ll just fix the water heater.
Similarly, now that I’m mature, I CAN endure long morning meetings without coffee. But why would I choose to do that?
My disagreement here is that your water heater is static, more or less it works or doesn’t work, and fixing it doesn’t impede its growth (because it has none). The correct option is to do your best to improve the environment for kids rather than keep them in an unhealthy environment and give them medication so that they don’t notice it.
I was diagnosed with ADHD at a young age.
The doctor prescribed a dose that was — in retrospect — unreasonably high. This became a problem because: the doctor was an authority figure, I tried to be agreeable in general, and my parent had just finished explaining how I was a broken disappointment.
The result was that I didn’t advocate for myself and so spent several years on a dose of ADHD medication that was fairly unpleasant and just accepted the side-effects as normal. If I’d spoken up, I suspect my treatment could have been modified to be unpleasant.
This is not an argument for or against diagnosing early.
But, if you’re going to put your 6-year-old into ANY sort of long-term treatment, you have a duty to be really, really, really proactive about questions like “how does this medication make you feel?” and “no, really, how does the medication feel?”
> If I’d spoken up, I suspect my treatment could have been modified to be unpleasant.
*less* unpleasant? Or a shift from ‘fairly unpleasant’ to merely ‘unpleasant’?
Summary of The Body Keeps the Score, a book about the debilitating effects of trauma and ways out of those effects.
What would a society which made major, effective efforts to minimize trauma look like?
If the body is that essential for the good functioning of the mind, uploading is going to be even harder than it currently appears to be.
Read and weep. But most likely it’s not as bad today.
https://en.wikipedia.org/wiki/Auguste_Ambroise_Tardieu
(Hmm, the article on wiki doesn’t do him justice, in that it doesn’t paint an accurate picture of his findings, limiting itself mostly to sexual abuse. His findings were also about even more widespread physical abuse.)
@Nancy Lebovitz
The book (or the reviewer) seems to make the classic mistake where, when A can cause B, they conclude that B means there is A. They even go so far as to argue that learning is a sign of trauma!
When being human is equated to being traumatized, then the solution to remove trauma is…
Mass graves or dehumanizing people by wireheading.
Many believe that moderate trauma is necessary for growth and the absence of trauma is very damaging, much as too little exposure to pathogens can cause lots of autoimmune disease.
Can you have love & friendship if you try to eliminate even moderate trauma?
On the other hand, if a problem is presented as overblown, the problem still might be serious.
Suppose that minor trauma is a part of life, but major trauma is destructive and somewhat avoidable.
I’m not saying that the problem isn’t serious. I’m arguing that presenting completely normal human behavior and experiences as severe trauma is a very bad idea.
> What would a society which made major, effective efforts to minimize trauma look like?
Scott’s fiction piece “https://slatestarcodex.com/2018/06/19/the-gattaca-trilogy/” has a paragraph on this. Read from “Anton was five years older.”
@ b_jonas
Thanks– I’d missed the GATTACA piece.
Have some related sf.
By Sheckley (title forgotten): Everything in life depends on your mental health score. Showing anger will lower your score. Eventually, he can’t pass for calm any longer and he’s imprisoned for life in VR. Faint memory– there’s some looming threat, and anyone who’s aggressive enough to face it is imprisoned.
By Tom Purdom (title forgotten): There’s psychotherapy that actually works. It’s so expensive people are wrecking themselves trying to get enough money.
By Margaret St. Clair (Rations of Tantalus/The Rage): People are required to take tranquilizers to prevent fits of rage, and the viewpoint character can’t get quite enough of the tranquilizer. It turns out that the tranquilizer causes the fits of rage, and the other part of the problem is not being allowed to show normal amounts of desire and frustration.
What’s a good period for dollar cost averaging? As in, I have a pile of cash, and want to put it into stocks over X years.
Bonus question: how does the answer change if instead of cash, I have an insufficiently diversified stock portfolio, and want to move it into ETFs?
It is my understanding that dollar cost averaging is not meant to tell you over what time to invest a specific pile of cash, but to tell you that there are significant benefits to continuous regular investment, regardless of where the market is at any given time.
Good question. With the money in hand, and not earning much of a return, there’s a bit of a tradeoff. You don’t want to invest it all at a market peak, but you don’t want to stay out forever. It looks to me like we’re currently having scary corrections every couple of years, and a recession or crash approximately once a decade. I’d use those numbers to decide, and probably err on the side of shorter – so maybe 1-2 years. But I’m just a random casual investor, not any kind of expert.
With insufficient diversification or other portfolio rebalancing, I’d move faster, because this is a regular event. My ideal for stock from the company I work for (stock plan or RSUs), is to just routinely unload it at the point where its gains become “long term”. But if there’s a lot of money involved, I might sell a chunk of stock a week until I’m rebalanced, rather than all of it at once. (I also know the company cycle – there’s usually a dip twice a year right after the stock plan delivers stock to employees, because of immediate profit taking, so I try to sell the month before.) And I’m sloppy – I never seem to manage to rebalance on time.
Why wouldn’t you just put the whole thing into stocks immediately? Or put half in stocks and keep half as cash?
Well, I don’t simply want to maximize the expected value of my investments. I want to maximize the expected utility of my investments, which is, uh, proportional to the logarithm of the value…
Haha, just kidding, the expected utility is determined mostly by whether I invest right before a crash and get totally pissed about that.
The standard defense: If your timeline is long enough for the money to be in stocks, crashes don’t matter, just wait them out and eventually you will be better off.
There’s probably not really a satisfactory answer. I have seen many people suggest half in a lump sum and half in installments over a year. But on average 100% lump-sum-today will outperform, at slightly higher variance and larger risk of regret. Time in market, and all that.
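The lump-sum-vs-installments tradeoff can be sketched with a tiny Monte Carlo simulation. This is only an illustration, not financial advice: the monthly drift and volatility (`MU`, `SIGMA`) are made-up parameters, not a market model, and cash is assumed to earn nothing.

```python
import random
import statistics

# Hypothetical monthly return parameters (illustrative only, not a forecast).
MU, SIGMA, MONTHS, TRIALS = 0.007, 0.04, 12, 20000

def final_wealth(invest_per_month, rng):
    """Invest invest_per_month[i] at the start of month i; idle cash earns nothing."""
    wealth, cash = 0.0, sum(invest_per_month)
    for amt in invest_per_month:
        cash -= amt
        wealth += amt
        wealth *= 1 + rng.gauss(MU, SIGMA)  # one month of market returns
    return wealth + cash

# Use the same seed so both strategies face identical return paths.
rng = random.Random(42)
lump = [final_wealth([1.0] + [0.0] * (MONTHS - 1), rng) for _ in range(TRIALS)]
rng = random.Random(42)
dca = [final_wealth([1.0 / MONTHS] * MONTHS, rng) for _ in range(TRIALS)]

print(f"lump sum: mean {statistics.mean(lump):.4f}, stdev {statistics.stdev(lump):.4f}")
print(f"DCA:      mean {statistics.mean(dca):.4f}, stdev {statistics.stdev(dca):.4f}")
```

With any positive drift, the lump sum shows a higher average final wealth (more time in the market) but also a higher standard deviation, matching the "higher expected return, higher variance and larger risk of regret" framing above.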
It’s a well-known fact in politics that the other side lies and my side is honest. Left-wingers believe the right-wing politicians and press lie, and right-wingers believe the left-wing politicians and press lie. I am a left-winger and, surprise surprise, I believe right-wing politicians and press lie.
But I think this is actually true, and not just a product of my bias, at least here in the UK. Here is a summary of violations of IPSO (press regulator) rules by various newspapers in the UK. The second plot in that link shows specifically inaccuracy rulings. All of those papers are right-wing with the exception of the Daily Mirror, a left-wing tabloid paper. The Daily Telegraph and The Times are supposed to be serious newspapers, while all the others (including the Daily Mirror) are considered lower quality papers.
Apparently IPSO doesn’t cover the Guardian or the Independent, which are generally considered to be “serious” and left of centre. But here is an article in, um, the Guardian (ok, I know how this looks, but its data comes from an independent source) saying that the Guardian is the most trustworthy, accurate and reliable newspaper in the UK.
So is this just another manifestation of me believing that “my side” is better and ignoring (or being unaware of) evidence to the contrary? Or is it actually true that right-wing sources are more prone to lying to further their agenda than left-wing sources? Is it true in the UK but false elsewhere?
It’s not like the Guardian and the Independent talking out their arses is unheard-of.
And anyway, smart people don’t tell flat-out lies. They weave cherry-picked half-truths with speculation to create a narrative that nobody can poke a definite hole in yet doesn’t reflect what the real world is like.
And I’m talking about both sides here.
Or even entirely true things that are made important.
Mass shootings in the United States are a good example for the left. They are rare and pose essentially no statistical danger to the average American. Being killed by a rifle in the United States is less likely than being killed by a blunt object (club, etc).
Or immigrant killings, if you want to take a right-wing talking point. First generation immigrants are substantially less likely to murder than the average American.
But without telling a single lie, just by emphasizing every single mass shooting/immigrant killing, you can create a narrative that the United States is a shooting zone for the innocent/flooded with dangerous foreigners.
What a valuable perspective, thank you. I think sins of emphasis are one of the most important things to keep in mind when navigating today’s world.
Another phrasing of this is that OP may be objectively right about their claim, but using that metric as a measure for “news source goodness” is a subjective choice.
Note that this is essentially the same analysis that Chomsky has made about the media’s foreign policy coverage for many, many years (with the addition that he added that most of this selection process is unconscious).
I think it’s all mostly unconscious regardless of the topic at hand.
I think that’s why practically all the noninterventionist politicians or pundits in the U.S. tend to be considered whacko along some other random direction. The selection process for stories favoring “America should spread freedom and democracy” is just so strong that the only significant people challenging it are fringey weirdos. Tons of paleoconservatives, communists, libertarians, and anarchists would prefer a much more isolationist policy, but outside the fringes everyone is pretty pro-intervention.
The blade itself incites to violence. Indeed.
Yeah, the cardiologists/Chinese robbers thing is definitely a huge issue. But I think it’s the case that the Mail, Sun etc tell flat-out lies in addition to that.
A common pattern I’ve seen is reporters who tell a story that follows some comfortable/desired narrative, and then either omit the contradictory facts that would undermine the narrative, or downplay them. NPR seems to leave them out or downplay them; the New York Times seems to like to stick them in the last couple paragraphs of the story.
Funny story,
As a young man, I listened to NPR every day.
A while back one of their stories was basically:
“Innocent black man shot by cop; he didn’t have a gun.”
I thought that was some very weird phrasing, and looked it up in the newspaper. Their version was:
“Man shot for attacking cop with knife.”
I haven’t touched NPR since then.
I believe it’s entirely possible.
There is no law of the universe that society matches reality at the mid point.
But there’s a couple of meta-problems.
Let’s say we both design a protocol that can be carried out by a soulless, unfeeling automaton to examine data on the issue in some way.
We each get back results: one set says the left is worse, the other says the right.
Let’s say our actual methods are equally “good/bad”.
Now we want to let the world know about our findings.
What are the odds that one of us will find it harder to publish in a scientific journal or in the guardian?
I remember an old story about a researcher who found that one side of the political aisle showed on average more [negative personality trait].
They were getting lots of media attention and citations.
Then a minor flaw was found in their data analysis, left and right columns had been swapped, the exact same thing applied but to the other political persuasion. Suddenly the citations and mentions in news articles dried up.
Re-run this a few hundred times.
So imagine a hypothetical reader, they search the news and the literature. what do they find? probably a large dataset highly biased by the views of publishers and academia.
According to the wikipedia article on the IPSO (https://en.wikipedia.org/wiki/Independent_Press_Standards_Organisation):
Which makes me think “what would be different if the IPSO was a sockpuppet dedicated to give negative publicity to the competitors of their members?”
That’s not saying I give much credit to the Daily Mail; I don’t even know who the members of IPSO are, but that’s only weak evidence by itself.
Yeah, it’s self-regulating, so the newspapers in the article (including the Daily Mail) are members of IPSO.
Thanks, this mostly clears up my objection to the legitimacy of IPSO.
In no way is the Guardian more reliable than the Financial Times, so your source is, um, not trustworthy (it conveniently omits the FT). All British online media except the FT and BBC seem, from my continental perspective, blatantly biased toward their preferred political agenda, much more so than the serious American ones.
Yeah, good point; FT is pretty good quality and it’s a shame they were excluded. Maybe it’s because their readership is smaller? BBC these days is basically just another mouthpiece for the government.
FT is no more excluded than The Guardian. They chose not to sign up to IPSO.
Note that IPSO is itself a rival to IMPRESS, which is another regulatory agency that no big newspaper signed up for, but whose membership gives an exemption to the GDPR, because they have official approval.
So there are (political) layers here, where membership of these organisations may be an attempt to fight regulation and/or to get a competitive advantage over informal journalism.
I think AlesZiegler may have been referring to my second link, which listed the Sun, Mail, Times, Express, Telegraph, and Guardian. The source is OFCOM but I haven’t tried to chase down the numbers nor why these papers were chosen.
> IPSO is funded entirely by the shadowy Regulatory Funding Company (RFC) which is dominated by a handful of national and regional publishers.
> The RFC writes the rules which dictate what IPSO may or may not do, and (as then RFC chair Paul Vickers made clear to the House of Lords Communications committee) must approve any rule changes.
> IPSO’s rules are therefore written and controlled by the very newspapers it purports to regulate “independently”.
Source: https://blogs.lse.ac.uk/mediapolicyproject/2016/10/31/impress-vs-ipso-a-chasm-not-a-cigarette-paper/
If anything this makes it even more damning for the IPSO members such as the Daily Mail who have been found to be full of factual inaccuracies.
Also, bear in mind that IPSO does not independently audit newspapers at will, but it responds to complaints from the public.
If the progressive left are more likely to make complaints about inaccuracies in the right-wing press than vice versa (and I suggest that this is the case) then you will necessarily see more findings of inaccuracy against the right-wing press than the left but that is not necessarily reflective of reality.
I think it probably is the case, but it would also be the case if the progressive left were made up of disproportionately more fact-conscious people than the right. I suggest that this is so.
I would suggest that both the right and left have preferred policies and worldviews, and that both the right and left emphasize facts which support their policies and worldviews, and downplay or deny facts which do not.
What makes a person right-wing or left-wing is their temperament (do they value order over novelty, safety over adventure, etc.?), not whether they are more “fact-conscious”. To the extent that fact-consciousness is a thing, I would expect it to be normally distributed across both the left and the right.
To paint your ideological enemies as allergic to facts is lazy and trite, and suggests to me that your exposure to them comes through an ideological filter.
And yet there are successful movements in politics right now that are heavily based on the confident assertion of factual inaccuracies. I am opposed to such reprehensible dishonesty, and so the charlatans in question are my “enemies” in some sense. Do you see the problem this poses? Any liar, however brazen, can dismiss those who challenge them as “painting them as allergic to facts, which is lazy and trite”.
Like this?
Most obviously the climate catastrophe movement, which confidently asserts negative implications of climate change enormously larger than the IPCC projections.
Two of my favorite quotes from an IPCC report:
Devil’s advocate:
This would also be the case if the right-wing was made up of disproportionately more fact-conscious people than the left, such that they were more likely to report problems in even right-leaning publications.
Indeed, you would EXPECT publications that catered to more fact-conscious people to get more complaints about factual inaccuracies than publications whose audiences care less about getting the facts right. And you would expect left-wing publications to mostly be patronized by a left-wing audience, and right-wing publications to mostly be patronized by a right-wing audience, so…
I have no idea if any of this is true, obviously, but that’s the point; it’s easy to construct a plausible story for any particular observation. Making up narratives that confirm your own biases is a trap, and it’s not a hard trap to walk into, because the bait is very, very tempting.
Good point well made. In my defense, I was mainly offering my “just so” story as a counterpoint to that of Fitzroy.
It’s not particularly difficult to find an issue where one “side” is lying and the other is telling the truth. The problem is that there are lots of issues and each side is likely lying about at least some of them.
Additionally, the worst lies are always told using the truth – or rather, a portion of it, with the inconvenient bits omitted.
Yeah, we could go back and forth on examples of lying, but what’s the point? There is no grand arbiter of lies, so the question isn’t going to be resolved.
@fion
There is a difference between lying and telling falsehoods. Telling a falsehood that you sincerely believe is true, is honesty of the deceptive kind.
The other side is more honest than you* think, because they actually tend to believe things that you consider so obviously false that only total idiots could believe them. Since they seem capable of dressing themselves, people tend to conclude that the other side is being deceptive. Your own side is also more honest than the other side tends to think, because you honestly believe things that the other side considers so obviously false that only total idiots could believe them.
Both sides cherry pick evidence, typically believing that the evidence that supports the other side is poorer.
I could go on and on.
* No matter what your side is.
The actual reason is that IPSO only regulates those who sign up for it, which The Guardian didn’t. I don’t understand why you would trust IPSO to tell you anything about how accurate newspapers are, when newspapers can just decide to not participate.
There are a ton of possible explanations of why right-wing newspapers top the rankings, other than that right-wing newspapers are more often wrong/deceptive, for example:
– Right-wing newspapers are more masochistic or more interested in being correct, signing up and/or staying signed up even when they know they will face many corrections
– IPSO has a left-wing bias (most newspapers seem to be leftist, UK newspapers make the rules for IPSO, 1 + 1 = ?)
– Right-wing newspapers are less likely to correct stories without intervention by IPSO (most investigations seem based on complaints)
– Right-wing readers are less likely to complain than (a subset of) left-wing readers
– Left-wing media state things in a way that is equally deceptive, but not technically false (or the falsehood consists of a quote, which is not rebutted)
I was able to very quickly find an article where The Guardian proves its own statement to be a falsehood in the next paragraph of the same article:
> When asked to provide evidence that mothers were making up abuse claims, she said she had personal experience and “submissions from people that this is the case”.
> The claim is a prominent grievance among men’s rights groups, but has been widely discredited in multiple studies.
> According to researcher Jess Hill, who has authored a book on domestic abuse called See What You Made Me Do, one of the most thorough studies on false abuse allegations from Canada found that non-custodial parents, usually fathers, made false complaints most frequently, accounting for 43% of the total, followed by neighbours and relatives at 19% and mothers at 14%.
I suspect that this mistake happened because The Guardian didn’t listen to what Hanson actually said, but interpreted it as something very different based on a stereotype, and then set out to prove that stereotype false.
I followed The Guardian a bit in the past and they got called out for their biases and mistakes in the comments a lot, until they closed the comments for most (if not all by now) articles.
You make a lot of good points, and unfortunately I need to leave in two minutes so I don’t have time to try and respond properly, but I do want to disagree with your statement that most papers seem to be leftist. The Guardian and the Mirror are on the left, and some people make an argument that the Independent is as well, though if so it’s only very weakly, but all the other papers (pretty much) are on the right. The Mail, the Express, the Telegraph, the Times, the Sun, the Star… Maybe we could make an argument that the FT is near the centre, though still right of centre.
Are you talking about the papers that were featured in the article you linked, or the general newspaper landscape? In most Western countries, the general newspaper landscape seems to lean left relative to the populace.
Then again, the UK is very tabloidy, so they might be different.
I was only really talking about the UK (but I think what I said is true of the “general newspaper landscape” in the UK).
It’s interesting that we seem to be such an outlier (assuming your impression is correct).
Reality has a well-known liberal bias.
RationalWiki may well be the least self-aware site on the internet.
Also relevant
https://en.wikipedia.org/wiki/Wronger_than_wrong
I’ll say it again: media bias isn’t just in what a source says that might be false, but also in what it doesn’t say, that might be true.
(Some of this is touched on above. A source can and will leave out context, including context that will make their core story much less important than it’s reported to be.)
Are you using the definition of “lie” that means only making explicit and provably false assertions of fact, or the definition of “lie” that means acting with malicious indifference to the truth such that other people end up believing falsehoods?
Because the left basically controls the journalism schools, along with the universities in general, and mostly doesn’t even bother to deny that any more. Journalism schools are where reporters learn how to present whatever view or narrative they are pushing while carefully avoiding explicit assertions of provably false fact. And how to edit newspapers so as to make sure individual reporters don’t slip up on that point. So they wind up pretty good at avoiding type-1 lies in their journalism.
The right gives the left no credit for avoiding type-1 lies when they catch them in so very many type-2 lies. And they don’t bother as much with avoiding type-1 lies themselves, because A: they are at least consistent in not caring, and B: they don’t have as much access to first-rate journalism graduates.
The amount of false stuff you’ll wind up believing if you uncritically read a “right-wing tabloid” is probably not too far off what you’d wind up with from a “serious left-leaning newspaper”, but with the right-wing tabloid it’s easier to apply critical thinking to catch the lies and easier to blame the right-wingers for blatantly lying to you rather than admit you were fooled by clever half-truths.
I think this is a pretty good analysis… but isn’t it worse to blatantly lie than to end up producing clever half-truths?
Epistemologically, sure, but possibly not in practical terms – depends on the lie and the half-truth and who believes which.
Blatant lies may also be easier to disprove than ones obscured by some facts, half-naked truth notwithstanding.
Why? They both produce the same undesirable end result, they both reflect the same malevolent intent, and the clever trickery is usually harder to catch.
Mostly, I think we give clever trickery a pass because we want to be able to use it ourselves without thinking poorly of ourselves. That’s not a good thing.
It’s also not good that we are producing a generation’s worth of people who, when caught telling type-2 lies, will be outraged and indignant about being treated as liars and will support each other in this outraged indignation. We’re replacing respect for the truth with respect for cleverness in deception.
Yes.
Yes.
Hard disagree. I think the vast majority of such cases are ones where the people involved are sincerely misleading themselves just as much as they are anyone else.
EDIT: Actually I also don’t think I agree with the first claim, now that I think about it. Believing things that are obviously and verifiably false seems worse to me than believing things which are misleading because taken out of context, or something along those lines.
We’re talking about people believing things that are objectively and verifiably false in both cases. The “misleading” statements, mislead people into believing something more than what was explicitly stated – that’s pretty much the definition of misleading – and that something more is objectively false even though the misleading statement was just fuzzy.
And if you put yourself forward as a journalist, then no, you don’t get a pass on that because you “sincerely believe” the false thing. You’re supposed to have been the one who figured out whether it was true or false, so you could tell the rest of us.
And really, if we find that you always very carefully stop short of explicitly stating that false thing while you keep “misleading” other people into believing it, then we’re going to be skeptical of the bit where this was allegedly a sincere mistake on your part, because how did you know exactly where to stop?
It seems to me that the logical extension of your argument is that good journalism is impossible.
There have been, in the entire history of the human race, precisely 0 people who are immune to cognitive biases about things they care about. That’s kind of the point of websites like the one we’re currently commenting on.
Reputable newspapers have practices in place to mitigate some sources of bias, such as printing no overt falsehoods, and trying to get quotations from opposing sources. Plenty of things still slip by. But it’s still better to have those practices than not.
There’s just as much bias, lying by omission, and so forth in right-wing as in left-wing papers. But what you’ve argued is that there’s an ADDITIONAL thing in the right-wing ones that isn’t there in the left: outright deliberate falsehoods. This means they’re worse. I can’t parse this any other way.
What we want from journalists and scientists is an honest and competent best-effort at getting to and reporting the truth. Their biases will screw them up sometimes, other times they will just flat make mistakes, but they should be making a serious attempt at learning the truth and conveying it accurately.
Management and funding sources and social pressure can all create an incentive for the journalists/scientists to fudge their answer or stop looking once they have what looks like a desired answer. But the best ones don’t like to do that, so mostly they just figure out what they’re not supposed to study and then go look into something else.
Perfect journalism is impossible. Good journalism is quite possible.
Good journalism almost certainly requires that journalists not operate in ideological bubbles, left or right. It also requires that they not rely too much on the “you can’t prove this wasn’t an honest mistake” when their mistakes keep coming so close to the provable-misconduct line and always in the direction preferred within their bubble. This isn’t the journalism we have, at CNN or Fox News, the Washington Post or the Washington Examiner, or as near as I can tell at their British counterparts. I don’t think I am out of line in demanding better, and withholding trust until I see better.
Bias can be thought of as the motivation to believe certain things. Since, as you say, “precisely 0 people […] are immune to cognitive biases about things they care about”, it ought not surprise you that if the motive to believe certain things is greater than the motive to be Tarskishly correct, then it’s great enough to color not just the object level content, but also even the practices to mitigate bias in that content.
How many times have you seen an article where “$opposingSide could not be reached for comment”? How hard do you think that journalist tried? Did they call ahead, schedule an interview for the following week, and sit down for half an hour or so like they did with the primary source? Or did they leave an email with a request for comment two hours before the story had to go out the door? How do you know? How many times have you noticed the big sheet of laminated colored plastic between a newspaper’s news section and its opinions section, so that readers could not fail to notice they were venturing into non-factual claims? Or the unmissable switch from 10-pt Times-Roman font to 15-pt Comic Sans?
Suppose I give you two buckets of water. One has normal looking water. The other has bits of dirt visible in it. Which one are you more likely to pour through a filter and let sit with an iodine tablet for half an hour before you drink it?
What if you later find out both were pulled from a stream teeming with harmful microbes? Which bucket is more likely to have you sitting at the latrine an hour later with a case of the runs? The one you filtered because it was obviously dirty, or the one you went ahead and drank because it looked clean and you were really thirsty?
It’s about as impossible as good science. Which is to say, it’s as impossible as performing science in a way that never produces false results. Which is to say, as impossible as something that we don’t really think of as good science, but rather “impossibly perfect science”.
Good science, by contrast, still produces false results, but in the limit, results whose degree of falsity decreases. Good journalism can do the same. But that journalism is only good if the journalists are motivated to Tarskian levels of truth, more than to any other beliefs they might have.
And the best known way to do that is to make sure the journalist pool has people with diverse sets of prior beliefs, then let them do their thing, both normal reporting and bias mitigation, and then aggregate the whole thing using a trusted mediator, and expect results which might still be false, but in the limit, less false than any single source. Then you do that again the next day, and the next, so that that limit is more likely to work in your favor.
These days, that trusted mediator is going to have to be yourself, and even that doesn’t work unless you’re that devoted to Tarski, too.
No. Blatant lies are easier to disprove.
So… they should be easier to avoid printing.
I think it’s not better, because I think the point of a newspaper is TO be a reliable/credible source of information, not to NOT lie. Misdirection or lying still makes the newspaper less credible, and me less likely to take what it says seriously.
I am aware that this is not what most newspaper readers select for, but allow me my own preferences here 😛
I don’t know about your country, but in the UK, the left absolutely does not control the journalism schools. Journalists are heavily dominated by the privately educated. They’re upper-middle class (not to mention white and male). They went to school and university with the people who ended up being the politicians (and the bankers and the businessmen…). In fact they’re often the same people. Johnson was a journalist; Osborne is now an editor. They represent a rich, right-wing club and their journalism reflects that. There are a few exceptions, but only a few.
Also, unlike in the US, there is much less money on the left of British politics, which I think is a big contributor to the problem.
And don’t fall into the trap of thinking that this is “serious left-wing papers” vs. “right-wing tabloids”. The Times and the Telegraph are “serious” papers and the Mirror is not. I know there’s a popular idea around here that the left are the middle class elite and the right are the ordinary working people, and that might be true in the US, but I really want to push back against the idea that it’s true in the UK.
And since we’re talking about the UK papers, It’s only right and proper that this excerpt from Yes, Minister be linked to.
https://www.youtube.com/watch?v=DGscoaUWW2M
https://khn.org/news/domestic-violences-overlooked-damage-concussion-and-brain-injury/
This points at a serious problem but underestimates the scope. I’m going to use brain injury to cover both concussion and sub-concussive injuries.
The arti