Did you know: the world’s largest Hindu temple is in the city of Robbinsville, New Jersey.
Among Antarctica’s few rivers is the Alph, so named because it runs through caverns down to a sunless sea.
Norway joins Portugal in decriminalizing all recreational drugs. I look forward to years of biased studies exaggerating the effects of this on both sides.
More on monopolies of older orphan medications: the cost of the periodic paralysis drug Daranide went from pocket change to $109,500/year.
A really detailed article on how Google is using computer vision technology to provide an unprecedented level of detail in Google Maps. Also: automatic identification of urban interestingness, principles of effective cartography, and the nth-level foundational work for self-driving cars.
How would you like to have your legal case judged by a guy called Justice Force Crater? Bonus: he disappeared mysteriously and was never found.
Guillermo del Toro says he saw a real UFO and it was “horribly designed”.
Jacobite: AI Bias Doesn’t Mean What Journalists Say It Means. Lots of people I know have been playing a giant game of chicken hoping someone else would write this article first so they didn’t have to – and it looks like Chris Stucchio and Lisa Mahapatra lost.
To Unlock The Brain’s Mysteries, Puree It. By blending a brain in a way that preserves cell nuclei, you can see how many nuclei there are per cc of brain soup and so get a more accurate count of the number of brain cells.
The forgotten history of the African-American community’s pro-eugenics movement.
Alice Maz recounts her days as a Minecraft simulated tycoon. Highly recommended.
Related: Eighty common scams in the Runescape MMORPG economy. Although apparently a lot of Runescape players are clueless twelve-year-olds, and saying “Give me 1000 coins and I’ll give you a cool item” and then not giving them the item makes you some kind of wildly successful criminal mastermind.
AI researcher Rodney Brooks makes specific dated predictions on the future of AI. Too bad he doesn’t give confidence levels and so there won’t be a fair way to judge him.
The brief postal correspondence between Charles Darwin and Karl Marx. Spoiler: Marx was an embarrassing-level Darwin fanboy and sent him a copy of Das Kapital along with glowing praise. Darwin politely answered that Marx’s work was “in the long run sure to add to the happiness of Mankind” but never actually read it.
Highlights of Wikipedia’s List Of US Medical Associations include the American Radium Society, Flying Doctors of America, and World Doctors Orchestra. Also, which of these would it make you most nervous to learn your doctor wasn’t in – American Board Of Legal Medicine, Physicians Committee For Responsible Medicine, or Physicians For Life?
Sean Carroll on Twitter: “Does one eventually reach an age where one stops having the anxiety dream about not being allowed to graduate from high school because one signed up for a math class then never attended it?” Reposting here because I’d never heard anyone mention this as a common dream before, but I absolutely have it (though not necessarily math class). Any theories why this happens?
Does a higher minimum wage decrease criminal recidivism? (h/t Marginal Revolution)
The US flag is probably partly based on the flag of the British East India Company, which appealed to the colonists as a model of British colonies enjoying partial independence.
You’ve probably heard that memo writer James Damore has sued Google for discrimination against conservative white men. It seems like a complicated case: political discrimination is generally legal but might not be in California (see here), and discriminating against white men seems hard to distinguish from affirmative action and various societywide diversity campaigns universal enough that I assume someone would have noticed before now if they were illegal. Some people are suggesting it’s more of a publicity stunt (possibly externally funded, like Peter Thiel’s support of Hulk Hogan?) to embarrass Google and raise awareness. Which it’s doing – read the whole 161-page lawsuit here if you want a look at the salacious accusations, which one commenter summarized as “industry-wide blacklists, cash bonuses for condemning wrongthink, calls to summarily fire white men accused of bad behavior, calls for ‘unfair’ (the exact word used) treatment of white men, openly booing the presence of white men, employees using company mailing lists to plot violent antifa actions” etc. Related: this profile of Damore’s (female, Indian-American) attorney. Also: apparently Mencius Moldbug having lunch with a Google employee “triggered a silent alarm, alerting security personnel to escort him off the premises”. Also: a commenter suggests an inside story in which the Damore memo was allowed to blow up because of office politics among top Google leadership.
Bloomberg: The Most Awful Transit Center In America Could Get Unimaginably Worse. They’re talking about Penn Station, a “debacle” which “embodies a particular kind of American failure”. And the unimaginable worseness is that the tunnels under the Hudson are decaying and might need to be closed, which would throw New York’s transportation grid into chaos.
Also Bloomberg: Study: Lobbying Doesn’t Help Companies Or Their Shareholders. This is well within a large body of work finding that money doesn’t really matter in politics, but if true it means both that popular wisdom is so wrong we should be thrown into near-Cartesian doubt about everything, and that corporations are idiots and throw away money for no reason. I put this alongside the “medical care doesn’t improve health outcomes” papers in “well, either this is false or everything else is”. Still give it a 50-50 chance of being true, though.
For the past forty years, every time new works were about to come into the public domain, Congress extended copyright law to keep them under copyright. Now, with the growing power of open access movements, there doesn’t seem to be any political will for this, and works from 1923 will finally enter the public domain next year, followed by one year’s worth of works every year thereafter. Biggest potential loser is Disney, since Mickey Mouse would become public domain in 2024. H/T MR.
Van Bavel, Feldman-Hall & Mende-Siedlecki’s paper on ethics includes (first paragraph) a description of how a real-life trolley problem happened in Los Angeles in 2003 – transportation officials chose to switch tracks, saving dozens of lives. H/T Siberian Fox.
A Medium article by some Google AI scientists and a professor critiques that “AI can analyze your face to tell if you’re gay or not” paper from a few months ago. They find that the AI was most likely just looking at a couple of non-physiological features – glasses, makeup, facial hair style, tanning. Then they show you can get pretty accurate gay/straight classifications just by doing this manually to an image database. [This link previously was more condemnatory, but commenters have pointed out the original study specifically admitted this might be true]
Cambridge University’s Centre for the Study of Existential Risk has created a Civilization 5 mod that adds superintelligent AI dynamics to the endgame.
The newest addition to GiveWell’s top charities is No Lean Season, which gives poor rural Bangladeshis enough money for a bus ticket to the city, so they can work there during the lean season when there is little farm work.
Primatologists: Bonobos prefer jerks.
NYT: As Labor Pool Shrinks, Prison Time Is Less Of A Hiring Hurdle. Included as a sort of nod to market optimism, where if there’s enough demand all of these problems like discrimination against former convicts will solve themselves. Of course, it would be nice if we didn’t have to wait for economic booms.
In the early 1900s, Sears sold over 70,000 build-your-own-house kits, including some for really impressive mansions.
New research suggests the Sicilian Mafia originated from a sudden increase in the demand for lemons after they were found to cure scurvy; the resulting need to control lemon theft created an entire new dimension of underworld economics. Insert your own “market for lemons” or “lemon-stealing whore” joke here.
Chris Dillow of Stumbling And Mumbling joins the Hereditarian Left.
New entry in the research program linking perception and cognition: acuity of color vision correlates highly with IQ. And of course Michael Woodley is using this to try to demonstrate dysgenic effects.
New study finds more evidence that small class size improves test scores. Especially interesting: in addition to improving them the normal way by students doing better, it improves them because when public school class sizes are smaller, high-SES parents are more likely to send their kids to public schools. But there’s an effect even beyond this.
Larry Sharpe is the new (black, raised in poverty) face of the Libertarian Party. Key insane quote: “In the short run, Republicans will break, we’ll get a bunch. In the long run, we’ll absorb the Democratic Party. The Democratic Party will go away in 30 years. It won’t exist. It’ll be the Libertarian Party.”
The homicide detective said: “Before you connect the dots, you gotta collect the dots.”
By way of adding to one’s dot collection, and by way of variation, I recommend this beauteous coding (distillation, articulation) from Bret & Eric Weinstein. It’s thinking outside the mass grave, assuming avoiding collapse is possible.
One minor thought about the AI bias article: I don’t disagree, but you have to think about what you are really measuring. For example, when you look at rates of re-offending, you’re talking about convictions, not crimes committed. So, for example, black people and white people use marijuana at about the same rates, but black people are arrested and convicted for it much more often. So if the police and criminal justice system have a bias, then that will show up in the AI’s predictions of who is more likely to re-offend.
The AI isn’t biased in the sense of being inaccurate; it’s faithfully predicting convictions, but you could still be accidentally treating people worse simply because they’re already treated worse by society. The same goes for other kinds of predictions too.
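To make that concrete, here is a toy simulation (all rates invented for illustration, not taken from any real dataset): two groups offend at exactly the same rate, but one is arrested and convicted more often, so a model trained on conviction data will “accurately” report a gap in re-conviction risk.

```python
# Toy simulation of the point above: equal true offense rates, unequal
# enforcement, and therefore unequal "recidivism" as measured by convictions.
# All numbers are hypothetical.

import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.30                 # assumed identical for both groups
ARREST_RATE = {"A": 0.20, "B": 0.60}     # hypothetical differential enforcement

def observed_conviction_rate(group, n=100_000):
    convictions = 0
    for _ in range(n):
        offended = random.random() < TRUE_OFFENSE_RATE
        arrested = offended and random.random() < ARREST_RATE[group]
        convictions += arrested
    return convictions / n

for g in ("A", "B"):
    print(f"group {g}: true offense rate {TRUE_OFFENSE_RATE:.0%}, "
          f"observed conviction rate {observed_conviction_rate(g):.1%}")

# A predictor trained on conviction data will faithfully reproduce the gap in
# the second column even though the first column is identical across groups.
```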
On that thing about acuity of color vision being highly correlated with IQ:
“This test evaluates colour acuity by having the participants physically arrange a series of 85 caps, each of subtly different hue, along a spectrum defined by two end caps (e.g. blue to green, pink to purple etc.)”
I’ve taken an internet version of that test and as I was doing it I thought to myself that the test results would likely be highly g-loaded regardless of whether or not there was any difference in colour vision based on g.
To my eyes at least, the different colours varied not just along the spectrum but also in different ways (at least some were lighter or darker). You had to ignore the other variations and only sort the rectangles by the variation along the spectrum under consideration. I thought the instructions weren’t especially clear on this, so the first hurdle is to interpret the instructions correctly and not put, say, lighter rectangles next to lighter rectangles.
If you do correctly interpret the instructions, I thought the task of decomposing the variations into spectrum-aligned and non-spectrum-aligned components in order to do the sorting was likely g-loaded. Also, sheer persistence is likely highly relevant and also g-loaded.
If you test something, it’s likely to come out with an IQ correlation, since g is useful for doing well on tests, even if the test is supposed to measure something other than g. In this case, I thought an IQ correlation was especially likely (even when doing the test – not a post hoc explanation when seeing this conclusion). So, this puts the conclusion of the study in doubt in my opinion.
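For what it’s worth, the “decompose, then sort along one axis” step is easy to state mechanically. Here is a minimal sketch (the swatch values are made up, and real caps would be measured colorimetrically rather than given as RGB): project each colour onto the hue axis and ignore lightness and saturation when ordering.

```python
# Minimal sketch of "sort by position along the spectrum, ignoring other
# variation": convert each swatch to HSV and order by the hue channel only.
# The RGB values below are hypothetical stand-ins for the physical caps.

import colorsys

swatches = {
    "cap1": (0.20, 0.55, 0.80),
    "cap2": (0.10, 0.80, 0.60),
    "cap3": (0.45, 0.40, 0.90),
    "cap4": (0.05, 0.60, 0.30),
}

def hue(rgb):
    h, _saturation, _value = colorsys.rgb_to_hsv(*rgb)  # keep hue, discard the rest
    return h

ordered = sorted(swatches, key=lambda name: hue(swatches[name]))
print(ordered)  # cap names arranged purely by position along the hue spectrum
```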
I have a proposed explanation of why Lobbying Doesn’t Help Companies Or Their Shareholders.
The correlation that the study picked up (higher spending on lobbying doesn’t translate into higher profits) could be explained by the following assumptions:
Assumption 1) Companies in businesses that are highly regulated do more lobbying than companies in businesses that are lightly regulated.
Assumption 2) Companies that sell a large portion of their output to the government (e.g., military suppliers) do more lobbying than companies that don’t sell a disproportionate amount of their output to the government.
And, of course:
Assumption 3) Companies that are highly regulated are less profitable.
Assumption 4) Companies that sell a large proportion of their product to the government are less profitable.
It seems highly probable to me that assumption 4 is true. If it weren’t, the press would have a field day exposing those companies’ unusually high rates of profit. It also seems highly probable, indeed probably inevitable, that assumptions 1 and 2 are true.
Assumption 3 is harder to gauge without any research, but at least it doesn’t seem unlikely. Companies that are highly regulated one way or another, at least at the federal level, include pharmaceuticals, banks, autos, and energy. There are probably others I’m not thinking of, but it seems hard to generalize about their profitability versus companies that aren’t highly regulated, so without any data I’d have to say: not proven. Still, assumptions 2 and 4 might be enough to show, statistically, that higher lobbying is not correlated with higher profits.
The way to eliminate the confounders, of course, is to pick companies that are in the same business (and probably about the same size, to eliminate that confounder) and see whether there is any meaningful relationship between amount of lobbying and profitability between the ‘paired’ companies. Since lobbying isn’t a random action but is completely self-selected, you can’t assume that a random group of companies that do lobbying is ‘equivalent’ to a random group of companies that don’t.
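Concretely, a minimal sketch of such a matched-pairs check (every company figure below is invented purely for illustration) might look like this:

```python
# Toy matched-pairs comparison: within each pair of similar companies in the
# same industry, ask whether the heavier lobbyist also had the higher margin.
# Pairing removes industry-level confounders such as "regulated industries
# lobby more and earn less". All figures are made up.

# each pair: ((lobbying_spend_A, profit_margin_A), (lobbying_spend_B, profit_margin_B))
pairs = [
    ((5.0, 0.12), (1.0, 0.10)),    # hypothetical industrials pair
    ((20.0, 0.08), (2.0, 0.09)),   # hypothetical pharma pair
    ((0.5, 0.15), (0.4, 0.11)),    # hypothetical retail pair
]

agreements = 0
for (spend_a, margin_a), (spend_b, margin_b) in pairs:
    agreements += (spend_a > spend_b) == (margin_a > margin_b)

print(f"{agreements}/{len(pairs)} pairs where more lobbying went with a higher margin")

# A real analysis would use many pairs and a sign test or paired t-test, but
# the pairing itself is what addresses the self-selection problem described above.
```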
The Jacobite magazine article was disingenuous enough that I felt the need to log in and post a comment into the void complaining about it. It’s a lot of verbiage and mystification pretending not to understand some simple ideas, likely in order to hide the fact that the authors really just support racist policies on their own racist merits.
Consider the Google page cited. They have read the paper, even going over graphs with a ruler to measure ticks (or maybe they used an old set of calipers they had lying around).
Yet they completely ignore the actual problem that the paper is trying to solve: do the erroneous predictions made by an imperfect model disproportionately harm one group? If you have a perfectly accurate model, the framework laid out in that paper would spit it right back out and tell you to use it unmodified. If your model makes errors, on the other hand, the Google construction ensures that those errors are distributed evenly across the races. The bank’s profits are reduced, but it seems convincing that the bank ought to bear the burden for their own bad model, rather than members of a particular race.
As a society we decided to make racial profiling illegal in a lot of important areas for many reasons. A very important one is a concern for individual rights. In the realm of populations it might be “rational” to discriminate against certain races, but that’s going to result in many individual members of those races being treated very unfairly, compared to how they would have been treated if judged by their true underlying “merit”. This is precisely the problem that the Google construction is concerned with, especially in the case of “accidental” redlining.
The Jacobite piece has lots of graphs and charts and is quite long, I guess to seem like it’s explaining something subtle or complicated. But they don’t actually engage with any of the subtleties of ML fairness research, and its elaborate schemes to balance fairness and accuracy while remaining “blind” to certain properties of the data — indeed they seem to willfully misunderstand it. If you explicitly give race as a feature to your model, none of these complicated constructions or measurements of fairness matter at all.
And that’s what they want to do. Their actual argument, boiled down, is quite simple, laid out in the last two paragraphs. They think racial profiling is good and should be encouraged!
EDIT:
I should add, the COMPAS algorithm had exactly the sort of problem discussed in the Google paper. The big thing ProPublica found wasn’t that COMPAS on average gave black people a higher recidivism score. It was that, when COMPAS made mistakes, its mistakes with white people tended to favor them, while its mistakes with black people tended to go against them.
The Google paper is showing something different; their estimator (the credit score) is already biased; note that almost all oranges above the 55 cutoff are repayers, which is emphatically not true for blue.
Incidentally, for real credit scores, blacks have poorer repayment performance than predicted by the credit score. However, it still might be true that the false negative rate (loans denied to those who would repay) at a constant cutoff is higher for blacks than whites. This would be true for an unbiased predictor given a true difference in repayment rates, and the bias towards blacks in the predictor may not be enough to compensate for it. The proponents of this sort of “fairness” would suggest we set thresholds to bias things even further against whites, so a white person needs to get a higher credit score to get a loan, despite being more likely to repay at any given credit score.
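To make the threshold trade-off concrete, here is a toy sketch (the scores and outcomes are invented, not taken from COMPAS or any real credit data): at a single common cutoff the two groups end up with different false positive rates, and equalising them requires explicitly group-specific cutoffs, which is exactly the race-conscious step the two sides of this thread disagree about.

```python
# Toy illustration of equalising false positive rates across groups.
# Each example is (risk_score, actually_reoffended); all values are invented.

def false_positive_rate(examples, cutoff):
    negatives = [score for score, reoffended in examples if not reoffended]
    flagged = [score for score in negatives if score >= cutoff]
    return len(flagged) / len(negatives) if negatives else 0.0

group_a = [(30, False), (40, False), (55, False), (70, True), (80, True)]
group_b = [(45, False), (60, False), (65, False), (75, True), (90, True)]

common_cutoff = 50
print("FPR at a common cutoff:",
      round(false_positive_rate(group_a, common_cutoff), 2),
      round(false_positive_rate(group_b, common_cutoff), 2))

# Equalising the rates means choosing different cutoffs per group.
per_group_cutoff = {"A": 50, "B": 65}   # hypothetical, chosen to equalise FPR here
print("FPR at per-group cutoffs:",
      round(false_positive_rate(group_a, per_group_cutoff["A"]), 2),
      round(false_positive_rate(group_b, per_group_cutoff["B"]), 2))
```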
The article makes no claims whatsoever about what algorithms should do. The point of the article is that journalists should not mislead people into believing that this “bias” is in any way factually incorrect.
If you want algorithms to make wrong decisions for fairness, advocate for it openly and honestly, accounting for its tradeoffs. Say “I want 9% more murders and rapes to ensure black and white false positive rates are equal.” And similarly, say “I’d like whites with the same criminal history as blacks to stay in jail while the blacks are released, in order to ensure equal false positive rates”.
If you want to argue that those things are fair, make that argument. My article is agnostic on that point. But don’t make that argument by pretending those tradeoffs don’t exist.
This is blatantly wrong. The Google construction is concerned with treating individuals in an explicitly race-conscious manner in order to ensure similar false positive rates.
It’s very true that I’m not engaging in the subtleties. That’s because my article is making a very different point: “algorithmic fairness will cause rapes and murders, and journalists regularly ignore this cost while misleading you into thinking the problem is inaccuracy.”
I am against racial profiling even in cases when it would be effective. Both as a matter of justice for individuals and because of the larger-scale problems that result from widespread profiling.
It seems pretty clear that you’d like race to be an explicit input to these models, and more generally that you think it’s legitimate for organizations to discriminate by race, whatever fig leaf of “agnosticism” you might try to claim. It would have been a much shorter and more readable article if you’d just said this.
The point of an algorithm like this is to profile people based on real characteristics which matter, as opposed to ones that don’t, like race. So are you against any type of profiling ever? This would have been a much shorter and more readable comment if you’d just said this.
Ok. How many people being murdered/raped/beaten is it worth to you that no one ever profile? That’s the core ethical issue, no matter how much you dodge it and journalists pretend it isn’t there.
In any case, this is irrelevant since COMPAS and most existing lending algorithms do not profile. They take as inputs nothing but individual characteristics. Now some of these inputs – e.g. committing multiple violent crimes – happen to be correlated with race, and the resulting decisions are often correlated with what racial profiling might do. But that’s fundamentally NOT the same thing as profiling.
(If you look at COMPAS, you discover that the biggest predictor by far is criminal history.)
If any individual wants to avoid being “profiled” in this way, he merely needs to behave like a member of the other group (e.g., commit fewer violent crimes).
I don’t see much point responding to strange allegations about what I secretly believe. If you want to ignore the core ethical issues by painting me and my coauthor as secret white nationalists, feel free.
In one subthread, another person noted that Steve Yegge left Google. He has an interesting write-up about it.
Note that his post is long, but most of it is him gushing about how Grab will revolutionize the world, so you can just stop reading when he (mostly) stops talking about Google and starts talking about Grab.
> “medical care doesn’t improve health outcomes”
Any cite for this? It seems clearly ridiculous (ohwhatisthis? makes one pretty unarguable rebuttal – type I diabetics who would die in weeks without modern medical care.) There’s no chance that this is _exactly_ balanced out by bad outcomes, so I have to guess that the real statement is “medical healthcare, while saving many lives and improving lots of outcomes, also makes yet more lives worse so is a net negative on average” -? Google search on this shows many citations suggesting that, in the US context, “more”/”expensive”/”expanded” healthcare is not additionally valuable over some baseline, which is an entirely different claim in at least two dimensions.
So I want to say this is self-evidently nonsense, but to be fair and honest I’d like to read someone make the argument for the claim: where could I look?
I assume he’s referring to the RAND Health Insurance Experiment and the Oregon Medicaid Health Experiment. Summarizing the results as “medical care doesn’t improve health outcomes” is not exactly right, but close enough.
> Summarizing the results as “medical care doesn’t improve health outcomes” is not exactly right, but close enough.
Fine, I see my error, though I don’t feel too stupid about it.
You seem to be right, so long as it is understood that ‘healthcare’ is a term of art here; if we are talking about the papers you assume Scott is alluding to, it has to be read as ‘certain marginal changes in the quantity and method of medical care delivery, in the USA specifically, and only looking at changes against the USA baseline’.
Insulin for diabetics (to make the point less arguable, for type I diabetics who quickly die without it for whom there is no other mitigation possible) would generally not be healthcare in this sense? Is this right? (I guess the other option is: yes it’s health-care even in this sense, and should be included, but the same policy changes that give more diabetics insulin will also cause entirely other people to get entirely unrelated treatments that harm them?)
@alef
Bingo. One natural way to parse the results of the Rand study and the Oregon study is to figure that some healthcare has positive effects and some has negative effects, and on average it’s roughly a wash. So if you give people “more access to healthcare”, some will get tests/treatments that make them better and some will get tests/treatments that make them worse.
Think about going to see a doctor or going to a hospital in the 16th century, or the 17th, or even the 18th. Doctors had stuff even back then that they knew how to treat successfully, but they didn’t know what they didn’t know and human bodies are complicated and a lot of the tests and treatments made people sicker in ways that weren’t being well measured, so on balance it was pretty dangerous to see a doctor or visit a hospital and follow the (dubious) medical advice you’d get if you went to one. Yet people still used doctors and went to hospitals, because conventional wisdom said that was the thing to do and because doing something feels better than doing nothing, even when doing something is expensive and produces a huge and completely avoidable risk of iatrogenic infection that dwarfs the risk of whatever problem you were hoping to treat.
Surely you’d grant that visiting a hospital or getting surgery was a scary proposition prior to the routine use of anesthesia and antibiotics, right? And certainly we know more today than we did back then, but the question is: exactly when did getting medical care become such a solid net-positive value proposition that one could confidently claim giving people more healthcare access is good for their health and giving them less is bad for their health…has that even happened yet?
Maybe it hasn’t.
That’s so implausible and would seem to require extraordinary evidence… as Scott said ““well, either this is false or everything else is”.
But maybe it’s true; let’s run with it. There are treatments that seem fairly unequivocally to be good on balance (the insulin thing again; I bet we could make quite a list). So there must be some that are significantly bad on average, not just neutral, but bad and common enough to cancel out the good. Like (entirely speculatively) the risk of infection from even visiting a hospital unless your condition is really bad regardless.
So then we should be able to make a list: if you have access to any modern medical care we know about, here are the ones definitely to use if needed, here are the ones to avoid no matter what the apparent fit to your condition, and here are some more complex cases where there’s a bit of a case-specific flow chart for each. And I guess the premise implies that this wouldn’t look much like the current medical establishment’s conventional wisdom. And, if you were rational enough to stick with this list even when it said to do ‘nothing’ contrary to conventional wisdom, you’d be materially better off on average.
Is something like this available from a credible source (e.g. one the Rationalist community would not scoff at)? Would the RAND study help suggest such a list?
Cost savings and cost effectiveness of clinical preventive care.
The problem with questions of this sort is that they fall into the construct of “would you do anything to live another day or stay at/return to your full health?” And the answer for so much of the population is, day to day, no, else we would not have so many smokers, sedentary people, drug users, BASE jumpers, and people with 20+ sexual partners. (Actually, IIRC, it’s at 5 sexual partners where health risks start to really rise.)
So it is *irrational* to expect the population as a whole to say “yes, we will pay anything for medical care, *even when it doesn’t have a net quality of life advantage, or a net cost advantage*.” However, humans gotta human, and universal health care (at least in the advertising stage) tends to neatly disguise the fact that anyone is paying for anything.
@alef:
Until quite recently (and probably still today) a fair bit of “preventative medicine” has been bad for you, particularly cancer screening. There have been major efforts to reduce the recommended amounts of screening for colon cancer and breast cancer, and screening has gradually been cut back, but we almost certainly still do too much, in part because groups of doctors have a personal and financial interest in over-treating. Here’s a relevant debate – should we look at all-cause or specific-cause mortality in deciding whether a treatment “works”? (I pick the former, a standard which suggests far too many routine colonoscopies and mammograms are being done.)
I believe John C. Goodman’s book Priceless: Curing the Healthcare Crisis had some relevant suggestions. IIRC, some standard advice includes:
– Avoid regular checkups. Only see a doctor if there’s a specific issue you’d be willing to spend your own money to treat – don’t see one just because it’s “free” or “cheap” or “convenient” or “routine”.
– Push back and try to avoid “routine tests” unless the test result is actually important.
– Favor OLDER treatments. Drugs or procedures that have been around a few decades are much more reliable than brand-new drugs and new procedures, which are especially prone to medical reversal.
On over-testing men: why I won’t get a colonoscopy (SciAm)
@alef
A lot more of modern medicine is dangerous placebos than it might first appear. For example, no one has to prove a surgery does what it’s supposed to in a double blind clinical trial like they do with pills.
A few hazy recollections of examples:
It was thought that some back pain was caused by bulging disks in the spine because people with back pain often had these bulging disks. So surgeons would fuse together vertebrae to fix it. Turns out if you do an MRI of people with no back pain they have bulging disks roughly as often. Surgery is pretty risky stuff, that’s a lot of negative care right there.
Similarly, doctors will do surgery to repair tears of the ACL. When someone got around to doing a double blind trial with sham surgeries, for the vast majority of cases, surgery did no better than physical therapy. This isn’t as risky as back surgery but still.
I would also recommend reading Ioannidis on the problems with medical trials.
Thanks Glen and quanta413 in particular, your pointers are sort of what I had in mind. I’ll read more into them. Perhaps the Goodman book (well, solely based on your description) is pretty much what I was thinking of.
A whole lot of what people think as “obviously good for you” has no evidence or even negative evidence for beneficial results.
Some of them are, but if you haven’t studied the issue, you will be really surprised at which ones are and aren’t supported.
The public is also easily freaked out about this. There was the move to reduce breast cancer screenings about 10 years ago, and an absolute shitstorm resulted from people thinking it was out to kill women.
@alef:
Here’s a relevant 6-minute Adam Ruins Everything segment: The Little Known Truth About Mammograms.
Upshot: conventional wisdom has long said frequent screening is great because it’ll catch cancer early. However, frequent screening increases both the risk of false positives and the risk of true positives that would be better left untreated. So the more testing you do, the more likely you’ll end up giving people stress and chemo or surgery they didn’t need (say, because their cancer is too slow to matter or wasn’t really cancer). Giving people chemo and surgery constitutes a substantial risk of illness or death, so you need to carefully balance the benefits versus the costs of screening that might lead to that result.
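For anyone who wants to see why the false positives dominate, here is a toy base-rate calculation; the prevalence, sensitivity, and specificity below are illustrative round numbers, not the real mammography figures.

```python
# Toy Bayes / base-rate calculation for a screening test in a low-prevalence
# population. All three numbers are illustrative, not real mammography stats.

prevalence  = 0.005   # hypothetical fraction of screened women with cancer
sensitivity = 0.90    # hypothetical P(positive test | cancer)
specificity = 0.90    # hypothetical P(negative test | no cancer)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive   # P(cancer | positive), Bayes' rule

print(f"Share of positive results that are real cancers: {ppv:.1%}")
print(f"Share that are false alarms leading to stress and follow-up: {1 - ppv:.1%}")
```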
> Insulin for diabetics (to make the point less arguable, for type I diabetics who quickly die without it for whom there is no other mitigation possible) would generally not be healthcare in this sense? Is this right?
Insulin is available over the counter in many places in the US. Not – to be sure – always the most up to date long acting varieties, but the older generic brands have been OTC for some time.
Likewise, ambulances still deliver traffic collision(*) victims to the hospital without regard for insurance status. So *access* to life saving care isn’t in short supply. (It’s also important to remember that even in the bad old days with the ERs clogged with under-insured indigents, less than 5% of US health care expenses are in the ER.)
A hundred years back, though, you couldn’t get insulin *at all.* And 40 years ago, it was all purified animal extract. Now even GMO insulin is super cheap, effective, and safe.
IMO, we shouldn’t be trying to make sure that everyone is provided the topline(**) care of the day at no cost, but should instead work on making much of current care as cheap as possible, and work towards perfecting better care, even if it takes fifty or a hundred years to become “available” (read: dirt cheap) to everyone.
(*)”Because ‘accident’ implies that no one is at fault.”
(**) I do not want to live in a world where the rich are somehow prevented from obtaining ‘better’ care than me. The level of government control that this implies is…disagreeable to me.
A better summary is probably “changes in the delivery of healthcare services to a population from ‘current level of care in ERs and irregular doc visits’ to [anything else] don’t change the health outcomes of that population.” (When the population is in the USA.)
It’s not that the different types of health care delivery don’t have different outcomes among the different populations that they serve (working people with ‘Cadillac’ plans vs Medicaid recipients); it’s that those populations are not equal to each other, and differences in populations change health outcomes in ways that overwash the delivery differences.
This is NOT the answer most policy wonks and healthcare administrators want to hear, on top of it being hard to test. As noted, the Oregon study is the best we have right now.
Megan McArdle has been writing columns on this topic for many years – here is another perspective.
TL;DR: The kind of health care that people argue about whether everyone should get as an entitlement seems to make so little difference as to get lost in the noise when we try to study it. The kind of health care that is unambiguously beneficial is also pretty much universal in the modern western world, so there’s no good control group we can use to study whether it does any good (if you think “does insulin really help diabetics” needs studying).
Health insurance is a financial product, not a medical one.
In politics, health insurance is conflated with health care. This allows us to be surprised when we provide health care to more people and observe no improvement in measured health outcomes.
https://www.cato-unbound.org/2007/09/10/robin-hanson/cut-medicine-half
Thank you for the link, lots of good (additional) links in there.
One quote: “Briefly, the idea is that our ancestors showed loyalty by taking care of sick allies, and that, for such signals, how much one spends matters more than how effective is the care, and commonly-observed clues of quality matter more than private clues.”
I am reminded of various debates over the value of time spent laboring over something – such as the aesthetic sense that some thing “handmade” is more valuable (more desirable) than a similar product made by a production line, even if the second example was sturdier, lasted longer, etc. Or, speaking of a class of students, that the one who studied more hours deserved as good of a grade as the whiz kid who got 100% on all the tests.
This is a base-line disagreement on what “fair” and “equal” are, I think.
(Also related: various lines of debate admonishing Catholic (or broader Christian) faithful for not supporting Obamacare on the grounds that this clearly breaks with the traditional corporal Works of Mercy. To me, this has had a sense of the ‘gotcha’ and motivated reasoning – while working for people to be in better health is admirable, the instruction is to be physically and emotionally present with people in misery, so as to comfort them. There is an examination to be made here, at least partly on utilitarian grounds, that weighs the donated time of comforting those with chronic, incurable disease and the utility of that action against extra work (at a job we love!) to earn dollars to find cures for, oh, COPD.)
(To be clear, I don’t think there is much doubt on how God weighs it – spend your working hours on work, live within your means so as to have resources to donate (and to avoid squandering the charity of others to support you) and dedicate a portion of time(*) to the practice of works of mercy.)
(*) Not of the ‘extra’ time after one has done everything else, but as a dedicated part of your day, just as there is time for sleep, eating, doing the washing, and brushing your teeth.
Well, for Catholics this is complicated by the fact that the US bishops have called for some form of universal healthcare for years. But of course it’s one thing to say “your bishops urge you to support universal healthcare” and quite another to say “your bishops urge you to support Obamacare, as it is presently written.”
> But of course it’s one thing to say “your bishops urge you to support universal healthcare” and quite another to say “your bishops urge you to support Obamacare, as it is presently written.”
The devil, as they say, is in the details.
(Plus, the difference between a locally-run parish program (likely resource-constrained to some degree), which has the ability to exert pressure on…persons with a deficit of industry and integrity, and a national-level program (with tons of money) with layers of red tape and provisions for “equal access to all”, is non-trivial.
There are opportunities for corruption in both, but the graft distribution is surely larger in the second, and the Catholic Church, to her sorrow, is made up of humans.)
The fact that suing nuns was considered an integral part of Obamacare probably clouded the picture for Catholics even more.
There are a lot of weird social science claims that don’t make much sense… because it turns out they use weird definitions of uncommonly used words, or do things like “marginal care” (I caught some economist making that mistake. Well sure, of course medical care on the margins is questionable. It’s on the margins.). A lot of social science debates are really “made up”, disguised language-problem debates, similar to debates of free will vs determinism… and when looking at the top-level answers, they end up not even wrong without a *great* deal of clarification.
There *are* subsets of medical care where this claim appears to be *more true*, at least in a way that makes sense… but I’m not going to go into them right now (private messages are fine).
Yeah, or they just look exclusively at the up- or downsides of some behavior and then declare that it is good/bad.
Wouldn’t most medical care be marginal, though (i.e. you’re not already taking it for chronic conditions)?
I don’t know this area well, but I’d interpret “marginal care” to mean the care that seemed least urgent/important.
Like, suppose you had to pay out of pocket at list price for medical care. You’d still go to the doctor the day you had crushing chest pains, or the day you woke up numb on your left side, or if you broke your leg.
Now, imagine you got some kind of insurance, but it wasn’t very good–you’d still have to pay out of pocket a lot, but maybe not the crazy list prices that doctors and hospitals charge. You’d probably go more often–maybe going for that annoying chronic cough that might be nothing or might need treatment. Or for the respiratory infection that maybe would go away on its own but might be pneumonia.
Now imagine that your insurance got still better, so you could afford to go to the doctor more often. Maybe now you’d show up for less urgent stuff, like an annoying sinus infection or a funny rash on your leg.
As you keep going along that progression, what you’d expect is that the medical care would help you less and less–going to the hospital when you have symptoms of a heart attack or stroke is a really good idea, even if it’s going to cost you a bunch of money. Going to the doctor when you’re really ill, or for chronic stuff that doesn’t seem to be going away, is probably a pretty good idea too, but it’s got less payoff. Going for a sinus infection probably isn’t all that great an idea unless it’s really causing you problems or just won’t go away. And so on.
Each time you go to a doctor, there’s some risk–he may screw up and do something that causes more harm than good, he may prescribe you medicine that interacts badly with you or something else you take or the pharmacy may screw up and give you the wrong dose, he may order some test and someone screws up the test and hurts you, you may catch something worse than you started with in his waiting room, he may do some treatment and somehow it goes wrong and hurts you, etc. I can think of several cases I know personally where some medical person screwed up and I or someone I know had problems as a result–ranging from screw-ups that didn’t cause anything worse than a bad night or two (this happened to me once that I know of, and at least once to my wife), to one that basically wrecked the patient’s life and left her an invalid (from a rare bad drug reaction).
If you only go to the doctor for really serious stuff, then there’s a good chance that the risk/reward tradeoff is positive[1]–they may screw up when treating you for your heart attack, but since you’ll probably die without that treatment, the risk is worth taking. As you go down that list to less and less important stuff, I think the risk/reward tradeoff becomes less and less favorable. At the end of that sequence, where you go to the doctor every time you have a cold, it’s likely that you’re making your health worse–you’re likely to catch something else at the doctor’s office, and he can’t do anything for your cold anyway. Maybe he’ll give you antibiotics to make you go away, and you’ll end up having a reaction to them or something.
Figuring out where you should stop going to the doctor is not obvious, at least to me. But I think that’s a common proposed explanation for the RAND and Oregon studies where they didn’t see big short-term health improvements from giving people insurance–the really sick people were going to the doctor even if they had to pay out-of-pocket, and the insurance probably mostly just added marginal care where the risk was close to the reward.
[1] Doctors also think about this stuff, so they’ll do riskier stuff to treat a heart attack or stroke than they will to treat a sinus infection.
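A toy way to put numbers on that progression (every probability and payoff below is invented; only the shape of the comparison matters): if the per-visit chance of iatrogenic harm stays roughly constant while the expected benefit shrinks with the seriousness of the complaint, the net value of a visit eventually goes negative.

```python
# Toy expected-value sketch of the "marginal care" argument above. All numbers
# are invented for illustration.

PER_VISIT_HARM_RISK = 0.02   # hypothetical chance the visit itself hurts you
HARM_COST = 1.0              # cost of a medical screw-up, arbitrary units

# (complaint, chance the visit meaningfully helps, benefit if it does)
complaints = [
    ("heart attack symptoms",    0.50, 20.0),
    ("persistent chronic cough", 0.20,  3.0),
    ("sinus infection",          0.10,  0.5),
    ("common cold",              0.02,  0.1),
]

for name, p_help, benefit in complaints:
    net = p_help * benefit - PER_VISIT_HARM_RISK * HARM_COST
    verdict = "worth going" if net > 0 else "probably skip it"
    print(f"{name:26s} expected net value {net:+.3f}  -> {verdict}")
```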
I think there’s an important distinction to be made about terms of art.
Sometimes, there’s an unfamiliar term (or a repurposed familiar term) that’s being used to label some concept you already have. Like, if I decide to refer to drug dealers as zwilnicks, you’ll need to figure out the new term I’m using, but then you’ll be able to understand me perfectly, because you already have the concept of “drug dealer” in your head.
Other times, there’s an unfamiliar term (or a repurposed familiar term) that’s being used to label a concept or bundle of ideas and assumptions that you don’t already have. Like, if I start talking about reverse transcriptase or a branch predictor, even when you get to know what word I’m using, what I say won’t make any sense until you have the concept they’re pointing to (and a bunch of surrounding stuff that makes that concept make sense).
And then, there’s the choice of words to use for some concept, done for marketing/propaganda/point-scoring purposes.
“medical care doesn’t improve health outcomes”
Don’t all of those studies have very, very constrained settings… and doesn’t any commentary that’s more honest make a lot more sense? Like certain types of chronic care, or controlling for having a base, well-known level of care?
Since it’s pretty clearly wrong in cases like diabetes.
Re: Larry Sharpe
I knew Gary Johnson was done, and gave up any remaining hope, when his biggest media moment of 2016 was “What is Aleppo?” At no time did he take advantage of, or even seem aware of, the record-low approval ratings of the other candidates in what should have been a huge chance for the Libertarian Party.
Based on the linked interview and no other knowledge, Sharpe seems to be the kind of leader the LP needs, providing direction while still tolerating chaos. He is ambitious long-term while phrasing his ambition in reasonable short-term goals like winning local elections. He recognizes the competing goals of ideological purity/avoiding tribalism and converting new members. He puts focus on communication and winning, both things the LP has trouble with. I hope he can effectively balance it all.
My biggest concern is the “absorb the Democratic Party” comment. With the relative size differences, it is much more likely to happen the other way around. He is very optimistic about changing people’s minds, and as much as I would like to see an ascendant LP as a new middle consensus, I doubt it will happen.
A close second concern is never turning away crazies. He seems to understand that radicals make people afraid, but doesn’t seem to understand that it won’t matter that he himself is not a radical. In a hostile media environment, the mere presence of a shared label with radicals will be enough to poison the well for most people. It will, however, filter for more crazies and before you know it you’re in a vicious cycle of unpopularity forever outside the Overton Window.
If he really believes cultural change is the answer, he should try to emulate Andrew Breitbart by starting a media company.
The 2016 election showed just how tough a slog it’s going to be for the LP. Gary Johnson may have been the most electable candidate they’ve ever had, and with a perfect storm of incompetency from both D’s and R’s, he got the most votes (and percentage of votes) by far of any libertarian candidate ever. However, the two parties have so severely stacked the deck against third parties, that Johnson couldn’t even get a sniff of the debates; meanwhile, the media seem to have some sort of vested interest in the two-party system, as they never gave Johnson a remotely fair shake. FPTP at all levels combined with a two-party system that dictates the media exposure rules means that third parties have pretty much zero chance at the national level, and only marginally better chances at lower levels. I don’t think the LP is going to get anywhere until reforms happen that both major parties are against.
I can agree with this, but why did Perot get into the debates in 1992?
Because Ross Perot didn’t run as a third-party candidate in 1992.
He did in 1996, and wasn’t invited to the debates then. Perot/1992 ran as a billionaire celebrity, and that invokes a whole different set of unwritten rules about who pays attention to whom. If the Libertarian Party finds a billionaire celebrity to run in their name, that person will be invited to the debates and get lots of media exposure, but it’s not clear how much of this would help the Libertarian Party.
So, how’s our track record of having billionaire celebrities as president, so far?
I was going to say that the Commission on Presidential Debates was founded after 1992 to make sure this didn’t happen again, but nope, I’m wrong.
I think the 2016 election showed us that being inoffensive and “electable” is not a winning strategy. Gary Johnson would have done better if he had made more noise to get the message out and made some people mad to show where he stood.
Forget highest percentage of votes for any Libertarian. Most of those people didn’t know who he was and simply didn’t want to check D or R.
You are absolutely right that lack of media exposure chokes promising third parties, but this isn’t the 1960s. The internet is more powerful than ever. We have already seen it used to devastating effect.
Pretty much what Urstoff said, with the addition that the LP doesn’t seem to be particularly well organized or disciplined (which, well, is about what you’d expect). I’m very interested in things like ballot access and electoral process reform, but even with them I don’t expect the LP to do much better in its current incarnation.
Mind you, I’ll still tend to vote for them and other third parties unless a decent Republican or Democrat comes along (HAHAHAHAHA), but I hold out no hope of that being more than an unremarked and meaningless protest vote.
To be fair, the Republican and Democratic parties don’t seem to be particularly well run either.
The actual problem is that the vast majority of libertarian positions are terribly unpopular, and all the popular ones have already been grabbed by people with other, more popular policy positions.
The vast majority of people may not want Sweden’s welfare state, but they like ours.
The vast majority of people may not want to invade Iran, but they like drones.
The vast majority of people may be upset about dumb regulations, but they don’t want to sue a massive corporation to get them to stop dumping toxic waste.
Etcetera, etcetera.
That’s just status quo bias. People don’t like any of these things on their own merits. They like what is familiar and available.
People do not vote to elect policy. They vote to elect people. That’s why campaigns are 5% policy critique and 95% character assassination. Charisma is the driving force, and passionate speeches with good slogans dominate any other factor for most voters.
It also doesn’t help that most libertarian positions I’ve seen are extreme. It would help a lot if they started picking low-hanging fruit and focused on making undisputed changes rather than fighting over the minimum wage.
The biggest asset of libertarianism, and by far the most ignored, is that it can potentially become synonymous with “common sense” in a way no other ideology can, because it is fundamentally right. But to do that it has to be very mindful of things like Overton windows for acceptable policy changes.
The other side has done well with the opposite strategy: The push for a $15.00 minimum wage seems both extreme and so-far successful.
Anyway, there is no low-hanging fruit. Every rule, tax, law, and regulation has a constituency who will fight for it tooth and claw. Might as well go for the ones which make a difference.
My fear when it comes to minimum wage increases is that 90% of the cost is opportunity cost, and thus very very hard to measure. Stuff like keeping the young or disabled or otherwise inefficient out of the work pool, and stopping them from getting experience. Or not creating a whole class of businesses based on low effort labor.
> Might as well go for the ones which make a difference.
Yep, that’s what I mean. Going for the ones where the effect is easier to measure, so it can build a reputation. Minimum wage is the opposite of that – even if libertarians are right and they succeed in lowering it, there’s no PR to gain because it won’t be easy to tell they were right.
Re: Copyrights
I cheered internally when I heard this news. Copyright and patent terms are already far beyond what is needed to encourage new works. The purpose of these laws was originally and should be to encourage creation that ultimately joins a robust public domain, and the public domain is severely limited if it is perpetually 70+ years behind.
Many highly significant cultural works remain under copyright protection. Does anyone here know of a charity or other organization that lobbies for copyright term reduction and/or buys copyrights for the sole purpose of releasing to the public? There doesn’t seem to be much in this sphere.
This might be my low IQ talking, but I found the Minecraft article totally boring. It’s actually one of the reasons I don’t play major MMOs anymore: the only way to advance into the endgame is to sit there and micromanage the local economy. I want to kill bosses, crush other players (assuming PvP is available), and participate in exciting stories. The tale of “how I cornered the wool market through double arbitrage” is not an exciting story. It’s just a bunch of Excel spreadsheets.
This is also the reason why I prefer to read about massive EVE Online battles, as opposed to playing the game myself. It just feels like a 9-to-5 job when you do it in person.
In a way, the account of EVE reminded me of the SCA, and in particular Pennsic. Both are settings within which different people are playing different games.
I really enjoyed getting fabulously wealthy in World of Warcraft off the auction house and crafting. I was also in a high end raiding guild (top 20 US on wowprogress) but doing that felt like doing something that everyone else was doing, only better. Figuring out the economy and exploiting it to amass ludicrous amounts of gold was like playing my own game. Also, it helped the whole “killing bosses and participating in exciting stories” thing. I bankrolled my guild, routinely dropping hundreds of thousands of gold into the guild bank so no one had to pay for repairs, potions/cauldrons/feasts, BoE epics when new content dropped, etc. And the best part was besides the guild master, no one knew it was me doing it. I did all of my auctioning on alts because I didn’t want anyone to suspect how much gold I had and bug me for money.
I loved the Minecraft article. Loved it. Wish she had more articles up – she hints that this wasn’t even her greatest triumph in a MMORPG. I’m dying for an article about whatever else she was playing.
Anyone else have any great articles on video games they’d like to post? I can think of a bunch, but the only one I could find – and really, one of my favorites – is this description of EVE Online and one of its most famous battles: EVE Online: The Most Thrilling Boring Game in the Universe.
Anyone have any other recommendations?
Apparently there is going to be, maybe, a huge EVE battle 3x the size of that old one, 90 minutes from this comment.
https://www.reddit.com/r/gaming/comments/7sa25p/after_15_years_eve_online_is_having_its_first/
Might not be quite what you’re looking for, but Boatmurdered (Dorf Fortress) and From Norse to Horse (Crusader Kings) both stand out in my memory
These look great – thanks! Looking forward to reading through them. The Norse to Horse looks very funny.
And thanks to Edward for the reddit link – I hope it happens and gets written up. It probably points to something wrong with the culture or with me, but I can really get into reading stories about events on EVE Online, a video game that I’ve never even SEEN played.
I downloaded Dwarf Fortress once, because someone told me that if I liked NetHack (I LOVE NetHack), I’d like Dwarf Fortress. But Jesus, I couldn’t make sense of it. Is it worth another look?
I’ve tried Dwarf Fortress and after slamming my brain against it for a couple weeks decided I enjoy reading about it much more than actually playing it.
This was before I got into Paradox games, though, so the layers upon layers of madness might click more easily now – but for me personally I doubt it’s worth the time competition with other games.
Another thank you to Edward Scizorhands and Gobbobobble for sharing these — I’m really enjoying reading them 🙂
Dwarf Fortress is more like SimCity with Nethack’s graphics, so don’t go in expecting roguelike action. You don’t command the dwarves directly, you order them to dig out an area or build a workshop and a dwarf with the relevant skills will get around to it at some point.
But it’s a pretty good SimCity, a Simcity with goblin invasions and elaborate deathtraps menacing with steel spikes. It’s worth a second look, if you don’t mind keeping the wiki open in another tab while you play.
Boatmurdered is classic! I’m glad Lasagna asked, because I was thinking the same thing after reading the Minecraft one.
Oh, also, for a similar Peek Behind The Veil aspect that the Minecraft article had, Sullla’s Civilization writeups are really good. This one I believe is where I learned about abusing the Demographics screen and score tracker to figure out what your opponents are up to (though I read approximately all his AARs in one binge so they blur together. This one is fun at least). Also: dotmaps and EVE-style spreadsheet-fu!
I’m engaged in a long-running multiplayer game against Sullla now and it’s the most intimidating gaming experience I’ve ever had. I have full access to his website and his old civ 6 reports and I still can’t do what he does.
Nice! I liked what I’ve read so far of the Civ IV write-up, but honestly, I barely remember that game now, so it’s a little confusing.
I don’t see any write-up on your link – is it not for the general public?
The game is in progress, so each team of 2 players is recording their actions for public consumption in the individual forum threads. Once the game ends in a few months, Sullla will probably use the threads as a basis for his write-up.
It’s open to the public, the only real rule is that players can’t read each others’ threads or the general lurker thread, and lurkers are not to share information with the players.
That article on google maps is quite interesting, but what I’d like to know is how does google maps know which aisle in Best Buy has tablets. Satellites don’t have X-ray vision, do they?
(Zoom in to see what I mean.)
That article and the Damore complaint make me wish I could avoid Google in every single respect possible.
You may not be interested in Google, but Google is interested in you!
Oh, that one’s easy. Best Buy tells them.
To go into detail, Google has a program where you can give them your store layout and they’ll set it up in maps. The local stadium has this too, showing the various seating sections.
https://www.google.com/maps/about/partners/indoormaps/
And as an aside, this is an interesting illustration of the problem with “just Google it”.
I found this information out by Googling “why does google maps show the layout of Best Buy”. If JulieK had done that rather than asking us, they would likely have found the same answer I did in short order.
But I had no idea that Google Maps did this, and without JulieK making this post, I might never have known. So in asking the question, they told us all that it’s an interesting question to ask, and ended up creating a thread with an interesting question and an answer rolled into one.
I’m fairly convinced that “the Left” is a dead man walking at this point. I say this in the perspective of the gradual shift leftward since the great revolutions, which were the first, classical liberal forms of the Left.
One might say, the Left has been like a snake shedding its skin ever-Leftward, with the older, classical liberal forms of the Left being serially othered as “the Right” and put in the same bin with the genuine Old Right (which was what liberalism initially revolted against – i.e. the divine right of kings, aristocracy, stable hierarchical social order, etc.)
It’s this “Left” in the sense of the ever-Leftward shifting vanguard that’s dead – it long overshot the mark and has ended up painting itself into a corner. Or as we say colloquially in the UK, it’s disappeared up its own arsehole.
I think things are going to settle back to more or less classical liberalism/libertarianism as the Left, which will be characterized by rationalism and atheism, and that will stand across the aisle from a more traditionalist, religiously-oriented, populist Right, but with both “sides” united in respect for Constitutionalism.
IOW, the whole spectrum will have shifted back Rightward a bit, and the “socialistic” elements will hive off to the Right (i.e. national socialism). The “sweet spot” was overshot, the “sweet spot” was the transition between Old Right and the new classical liberalism, that’s the “center”, and you can move Rightward and Leftward from there, but not too far.
Socialism seemed to come from the Left, but actually what it really was, was something of a nostalgie de la boue for the Old Right, for a settled, manually ordered society where everyone has a place – in response to the dynamism and anxiety of early liberalism and capitalism. The proper place of socialism is actually on the Right, and the genuine form of socialism is national socialism. The attempt at internationalist Left-wing socialism (e.g. Marxism) was the odd man out, a bizarro hybrid of expansive liberal universalism and humanism with Old Right ideals of a stable, directed, centrally-ordered society. It’s actually international socialism that was a “Third Way”, not Fascism.
Fascism/socialism was the “old way,” liberalism/individualism/universalism was the “new way,” and always has been.
In my opinion the appearance of the word “socialism” in “national socialism” is more to do with a particular set of populists trying to sound appealing to both ends of the political spectrum than because national socialism is actually a form of socialism.
I think national socialism is very different from socialism.
You also seem to imply that national socialism is older than Marxism. I believe this to be false. A very brief google search seems to confirm my belief.
I dispute this claim. Many socialists desire to see a very dynamic society. Many reject tradition and everyone-in-their-place. And my guess would be that it’s more atheistic than liberalism.
“I dispute this claim. Many socialists desire to see a very dynamic society. Many reject tradition and everyone-in-their-place. And my guess would be that it’s more atheistic than liberalism.”
That’s what I meant by internationalist (Marxism-derived) socialism being the real “Third Way.” It actually is a mixture of liberalism (with its atheism/rationalism/universalism) with socialism. But “real” socialism is the older type of socialism that Marxists derided as unscientific, naive, “utopian” – Owen, Fourier, etc., but also the nationalistic organicism of precursors like Herder and Fichte, even Rousseau before that.
Those were instinctive (one might almost say “Luddite”) reactions to capitalism, not really grounded in an over-arching theory like Marxism, but more in a visceral dislike of the dislocation and constant revolution produced by liberalism and capitalism, and a yearning for a more settled, structured kind of social order. Note also the element of paternalism in that older kind of socialism.
Collectivism, organicism, that type of thing – that’s the mark of true socialism. And that is the nostalgia for the older way of life that had been left behind, a settled social order, without competition, and directed by someone “wise.” Also always connected with nationalism – liberalism was nationalistic too, of course, that’s the only unifying factor you can have when you get rid of the king, that sense of blood and soil, but in liberalism it comes out as civic nationalism, whereas with socialism it comes out as something more like an ethno state (notice how all the Communist societies eventually sleepwalked into something identical to fascism – that’s because the true “Third Way” of the Communist utopia was always unworkable, because there’s no international working class solidarity, but there is the ready made unifying factor of national/ethnic feeling, and that’s what any socialistic type of organization will always be forced to fall back on.)
You might be thinking of the futurist/dynamism element of Fascism, but that really is the “surface show” (as opposed to the normal way people think of it, that the socialism was a “surface show” for Fascists, and that Fascism was the “final stage of capitalism,” which is complete nonsense).
We’ve been looking at politics in a skewed way because of the post-War propaganda from the internationalist Left which wanted to distance itself from Fascism after WWII – prior to that, it was more like gangsters competing for the same turf and the same types of recruits. We mustn’t be fooled by the internationalist propaganda that it was the real form of socialism – that’s just the fag end of the hubris of Marxism as a supposedly “scientific” form of socialism.
Another way of looking at all this might be in terms of the Jewish influence on socialism – it’s the Jewish influence that mixes liberalism/universalism with socialism to produce internationalism/Bolshevism, but the real socialism is the organic ethnic European/White product (and even anti-Semitism is right there in the beginning of it – e.g. Fourier).
All this is complicated slightly by the fact that America never had an Old Right (except vaguely via the descendants of Southern cavaliers), it started off as a classical liberal polity. In Europe the connection between the Old Right and Fascism is clearer – but also the connection between those and a national socialist form of social organization (again, a planned, ordered, collectivist society, ethnically fairly homogenous, run by wise men).
Why do you insist that utopian socialism is more “true” or “real” than Marxism and its descendants (in which I would include democratic socialism)? I mean, I guess there’s the sense in which it came first, but if that’s what you mean, I recommend you use the word “old” or “original” rather than “true” or “real”.
By the way, I’m not saying that Marxism (or any other stream of socialist thought) is the “true” socialism. But it’s probably a more central example of socialism than utopian socialism, and *much* more central than national socialism!
I’ll tell you what wasn’t post-WWII: the liberal German republic siding with the fascist-inclined Freikorps to crush the socialist uprising of the Spartakists.
I disagree once more. First a pedantic point that they weren’t “communist societies” but “societies which wanted to achieve communism at some point in the future”. Second a more important point that they* never ended up being identical to fascism, unless your analysis is so superficial that anything authoritarian is fascist. There are a myriad reasons why the Soviet Union failed to achieve its aims. Modern Marxists (and socialists more generally) have, for the most part, learned from its mistakes.
*Just to be clear, I guess I’m talking about the Soviet Union, Cuba and China here. Perhaps you’re thinking of other examples that I know less about.
The conservative center left is remarkably stable, powerful, and popular. Every industrial country is a welfare state. True, the hard left is nuts, but they like being nuts. The ‘leave us alone’ right is a strong force, but ‘leave us alone’ is not a natural rallying cry for a vanguard party seizing power and reshaping society.
A welfare state tends to:
a. Create a constituency who will vote to maintain it.
b. Solve (or at least mitigate) a bunch of visible social problems that voters mostly want solved.
I think both of those tend to make it a stable arrangement in a democratic country–talk about dismantling it, and both the beneficiaries and the people worried about those social problems coming back/getting worse head to the polls, and pretty soon, someone else is making policy in your country.
> conservative center left
Make your mind up!
A combination of nationalism and social democracy works pretty well – this is basically what existed in the U.S. from about 1933-1980, and in most of Western Europe for the post-WWII era. Unfortunately, it’s difficult to talk about this in any kind of coherent way since the term “National Socialism” is irrevocably tainted by association with Hitler and his crew.
I had thought (and hoped) that Trump’s election would move the Republican party away from laissez-faire and towards a more activist economic policy (protectionism and advocacy of a social safety net). Trump had made gestures in that direction during the campaign, and it’s clear that most Republican rank-and-file voters are motivated by what could broadly be described as “cultural” issues (e.g. immigration and PC) rather than by soft libertarianism. But the business wing of the Republican Party managed to co-opt Trump rather than the other way around. It remains to be seen whether the millions of Americans who favor a decent social safety net but aren’t on board with open borders, 57 genders, privilege-checking, etc., will ever find a real political home.
West Virginia?
After reading Jonathan Haidt I’ve pretty much started to see the whole thing differently. The push towards “socialist left” won’t slow down, even as they paint themselves more and more into a corner, because it’s based on personality traits that seem to become more and more prevalent. Which explains btw why very smart very independent populations like Bay Area techies not only tolerate but actually embrace left ideology. See the whole Google Damore incident.
Haidt argues that the left generation sees things through a harm/care PoV before anything else. “Does this thing harm vulnerable people?” Then it’s bad. “Can I do something to help vulnerable people?” Then it’s good.
Normal, classical populations have a whole set of “lenses” they use to decide right and wrong. They have a healthy respect for authority, fairness, loyalty, etc., in addition to the harm/care lens.
Interestingly enough, libertarians are a minority that is about as narrowly focused as the left, but on independence.
What the whole thing means is that things can’t and won’t change dramatically. Popular political ideologies aren’t social constructs and can’t really be changed by social movements, they’re simply rallying points for like-minded categories. Which categories are defined by personality, and as such mostly change with generations.
And the other conclusion is that modern marxism is much easier to explain this way.
So, anyone have any opinion on how the South Korean government moves to de-anonymise and tax cryptocurrency exchanges will affect the proposed launch of Luna (and its Love Cryptocurrency of Stars) there?
Regarding price-jacks on old generics, the supposedly pro-competition FDA seems to be closing off the escape valve of compounding http://www.fdalawblog.net/2018/01/compounding-remains-an-fda-priority-agency-announces-2018-compounding-priorities-plan-and-several-compounding-guidances-including-guidance-on-essentially-copies-a/.
wow, ain’t that somethin’.
America seems to have a serious problem with power balance between the FDA and drug companies. The latter seem to have become so politically powerful as to just puppet the former.
Meanwhile, over in Europe, the EpiPen that costs $600 in the US costs $69 in Britain. It would be interesting to work out why regulatory capture seems to have crippled the US health system while other countries seem better at resisting it.
They aren’t even drug companies, really. That’s the most frustrating part about this. These price hikes represent pure, unadulterated rent. The original inventors and investors have long since been paid off.
The companies actually researching new drugs, even if they gouge and manipulate the system, at least add value to the world. The Turing Pharmaceuticals are parasites.
Right, but that value takes decades to manifest, and cannot be accurately quantified until it does. Meanwhile, there are bills to be paid today.
In order for drugs, etc, to be efficiently developed, it is not sufficient that their actual developers have the right to sell them for a time at monopoly prices. It is also necessary that they be able to sell that right to a third party, who will then have the right to charge monopoly prices for a drug they did not develop (but which would not have been developed without them waiting at the finish line).
Otherwise, drugs can only be developed by people with the talent and resources to run a drug-development program and the time horizon to wait another twenty years to get paid. That gets you far fewer drugs than if you just need the talent and resources for drug development.
And none of this has anything to do with Daraprim or Epi-pens, but you knew that because we’ve talked about it before.
Can you translate that link into ordinary English for us laymen who aren’t familiar with FDA nomenclature? I understand “compounding” to be the combining or altering of FDA-approved drugs to meet individual patient needs. But I don’t have a contextual feel for the scope of compounding or what’s important about it. Were the paralysis drugs described in Scott’s original post compounded, or would permitting competition from compounding pharmacies impact that particular drug?
I don’t have a thorough answer for you, but I have one example. My kid takes a medication that’s compounded. In this case, the pharmacists are taking a drug that’s a pill and are crushing and mixing it with something (ora-blend is one option, apparently) to make a liquid medication. They do this for this drug frequently because it makes it easier for young kids to take (and it’s often taken long-term, so that’s nice) and possibly because it’s easier to get a lower dose this way. Downside is that no local pharmacies compound anymore (last one stopped doing it the first of the year), so we have to drive an hour+ to get it every couple months. When it’s time for a refill, we just have to call ahead a few extra days in advance to make sure they have some made.
JustToSay gives the traditional usage of compounding pharmacies, and it is a valuable one. Compounding pharmacies can make specialized drugs from generic chemical compounds.
A more recent valuable application of compounding pharmacies is that they can make drugs according to generic industry best practices on the basis of a license that says “we’ve verified that you’re good at this sort of thing generally and your facilities are clean, etc, so go ahead and make your one-offs”, whereas pharmaceutical manufacturers have to go through the FDA’s “we’ve looked at your specific process for making [X] and you can now mass-produce exactly and only [X]” approval process. So when some giant asshole figured out how to game the system to monopolize some obscure but vital drugs for fun and profit, some compounding pharmacists figured they could use their generic approval to fill a few thousand prescriptions for “specialized” “one-off” “custom” doses of Daraprim, etc, at rather less than Shkreli’s $750/pill.
While technically legal, this is clearly not what the regulatory regime for compounding pharmacies was meant for. So it’s understandable that the FDA wants to tighten their interpretation and enforcement of the rules. But the apparent vigor with which they are pursuing this, of all the many issues in their perpetually overflowing in-box, raises suspicions about their motives.
Long time lurker, first time poster.
I work in an industry that serves sterile compounders, so I have some familiarity with the FDA’s stance. I think a big part of the FDA’s motives have to do with the NECC compounding disaster. (Hope I posted that link correctly!)
The short version is that about six years ago, an outfit called the New England Compounding Center was producing large amounts of compounded drugs in insanitary conditions. The medication they produced became tainted with mold, and was administered to about 14,000 patients in 23 states. More than 70 patients died, and more than 800 survived but developed fungal infections as a result of the tainted medicine. It was the biggest disaster on record for the compounding industry, and it very much looms large in the minds of both compounders and regulators.
The FDA doesn’t want a repeat of the above, and the idea of sterile compounders mass-producing medicine to compete with price-gouging manufacturers likely sounds (to the FDA) waaaay too close to what the NECC was doing. I should be clear, I’m not claiming the FDA’s motives are purely benign. Just that they’re motivated by more than a desire to protect the interests of the Martin Shkrelis of the world.
Thanks, that was useful. As with Thalidomide and the Dalkon Shield, I think NECC may be an example of hard cases making bad law – or rather, bad regulatory policy – but in an understandable way that’s unfortunately going to be hard to counter.
The thing about the African-American community’s pro-eugenics movement reminds me of how people often take the Clintons to task for supporting tough-on-crime laws in the 90s, saying this proves that they don’t care about black people. This completely leaves out the part where African-American community leaders also supported those laws, so the Clintons’ support for them was not in spite of their links with the community, but in alignment with them. This is not to say the Clintons are not amoral power-mongers, just that their amoral power-mongering has enjoyed the support of many black leaders.
I would object that aligning your position with the “leaders” was not the same thing as aligning yourself with the “community.” Politicians in the 90s treated Jesse Jackson/Al Sharpton etc. like they were the Kings of Black People, and assumed that by complying with their demands you had fulfilled your obligation to the black population as a whole (which mostly did not benefit from, or care about, what the “leaders” said).
From my understanding, “the black community” wanted these things, too. Black voters trying to raise their kids out of poverty were tired of drug dealers pushing poison and gang bangers on their streets. I don’t think there was a significant opposition to the increased policing policies from rank and file black voters. And crime has massively dropped off since the early 90s. This created all the new problems associated with mass incarceration, but nobody really predicted those at the time. It’s really a “be careful what you wish for” situation.
The main issue is that black people are treated like “the black community” in politics, and have been for a couple of generations at least. No other group this size in the US gets discussed as if they are all (or ought to be all) on the same side of every political disagreement, which leads to heavy pandering from both white and black politicians.
I wonder (non-rhetorically) if that’s largely due to them voting very heavily Democrat.
I think that is a major part, but the cause there is unclear to me. It appears that there was a major shift with JFK that has stuck.
Political journalists lump much bigger shares of the electorate together all the time. Per Pew Research Center: black voters were about 12% of the 2016 electorate. “White men” and “White women” are frequently discussed groups (figure about half each of 73% of the electorate, or about 36% each). “Millennials” are frequently discussed as a voting bloc (25% of electorate).
I will grant that there’s less breaking down of black voters into subgroups (notwithstanding the funny Key and Peele “black Republican” sketch). An obvious explanation here is that there’s less of an urban/rural or class divide between black voters than between white voters.
@ baconbits9 / @ Randy M
The story is a little more complicated than that. In effect, African-Americans were single-issue voters, very concerned with civil rights issues, when both parties had other priorities.
From the Civil War until about 1933, almost literally every black voter in America was a Republican. That’s not to say that the Republicans were always great on civil rights issues, but the Democratic Party was the party of segregation.
From 1933 until 1964, black voters were divided. Most still considered themselves Republican, but often voted for Democrats like FDR and Truman.
In 1964, the Republican Party’s presidential nominee was Barry Goldwater, a senator who had opposed civil rights legislation. This was traumatic for black Republicans. That year, the whole infrastructure of black Republican organizations largely collapsed. LBJ won over 90% of the black vote over Goldwater.
And from 1964 to the present, those numbers have barely changed.
In the decades since then, black voters have viewed Republicans with deep suspicion. The gulf has widened over time, because Republican candidates usually have little reason to campaign to black audiences or craft platforms to appeal to black voters.
When I talk about black community leaders, I don’t mean Al Sharpton and Louis Farrakhan; I mean pastors, aldermen, city council members, the literal leaders of black communities.
From The Atlantic:
“Using the District of Columbia (a k a “Chocolate City”) as his laboratory, Forman documents how, as crime rose from the late 1960s to the ’90s, the city’s African American residents responded by supporting an array of tough-on-crime measures. A 1975 measure decriminalizing marijuana died in the majority-black city council, which went on to implement one of the nation’s most stringent gun-control laws. Black residents endorsed a ballot initiative that called for imposing harsh sentences on drug dealers and violent offenders. Replicated on a national level over the same period, these policies led to mass incarceration and aggressive policing strategies like stop-and-frisk, developments that are now looked upon as affronts to racial justice.”
“In the nation’s capital, widespread jury nullification in drug cases coincided with the city council’s passage of a law in 1994 that took away the right to a jury trial in many misdemeanor cases. As a result, D.C. residents have fewer rights to a jury trial than do the residents of most states. More-vivid evidence of deeply mixed impulses with perverse consequences would be hard to find: A law passed by a majority-black city council protected prosecutors (who remain majority-white) from the judgment of majority-black juries.”
Medium provides an alternate take, saying that the Congressional Black Caucus only went along with the 1994 crime bill because they feared that if they didn’t, something even worse would come along. However, I believe Slate’s more nuanced take is the closest to the truth. It shows that the community was divided on the subject, but there was in fact significant support among the black community in favour of the bill. For example, while Jesse Jackson was strongly against it, a coalition of black pastors wrote a letter to the Black Caucus urging them to vote for it, as did ten black city mayors. So the Caucus was in fact under lots of pressure from their constituents to vote for it in spite of their misgivings.
The AI Bias Doesn’t Mean What Journalists Say It Means post is refuting the simplest (and wrong) complaint about AI bias, but the problem does exist. The real issue is that using an algorithm gives the results an aura of objectivity when they are simply making existing bias explicit. Machine learning algorithms will find real probabilistic dependencies in the data, but the data itself is biased, so the results are biased.
This is very easy to see in criminal data. If blacks are more likely than whites to become subject to criminal prosecution due to racism (true), then using data from the justice system will tell you that blacks are more likely than whites to commit crime even if the opposite were true.
In cases when the data is biased machine learning is precise but not accurate. And this distinction is too often lost.
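To make that “precise but not accurate” point concrete, here’s a toy simulation – all numbers are invented for illustration and aren’t taken from any study discussed here. The underlying offense rate is identical for two groups, but offenses by one group get recorded more often, and anything fit to the recorded labels faithfully reproduces that gap:

```python
# Toy illustration only: invented numbers, no real data.
import random

random.seed(0)

N = 100_000
TRUE_OFFENSE_RATE = 0.05            # identical underlying rate in both groups
RECORDING_RATE = {"A": 0.9,         # group A's offenses get recorded 90% of the time
                  "B": 0.3}         # group B's offenses get recorded 30% of the time

records = []
for _ in range(N):
    group = random.choice(["A", "B"])
    offended = random.random() < TRUE_OFFENSE_RATE
    recorded = offended and random.random() < RECORDING_RATE[group]
    records.append((group, recorded))

for g in ("A", "B"):
    outcomes = [rec for grp, rec in records if grp == g]
    print(f"group {g}: recorded offense rate = {sum(outcomes) / len(outcomes):.2%}")

# Prints roughly 4.5% for A and 1.5% for B. A model trained on the recorded
# labels will "learn" that group A offends about three times as often, even
# though the true rates were equal by construction: precise, but not accurate.
```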
When they *might* be making existing bias explicit.
Though he hits the nail on the head re: the vast majority of journalists and the audience.
Let’s try an example where not everyone is hopelessly mindkilled:
In the EU it’s now illegal for auto insurance companies to discriminate on gender.
The companies are still allowed to discriminate on other metrics so they can take into account things like “# of DUI’s”, “# of traffic violations” and “# of accidents” etc and people with lots of them can get quoted higher prices.
Sadly, in reality young men genuinely are more likely to be in more accidents and do more stupid stuff.
Now let’s inject “JournalistLogic”. They come along, show that “# of DUI’s”, “# of traffic violations” and “# of accidents” correlate with gender. They insist that this means the system is “biased” and “unfair” because young men are still getting quoted higher prices on average thanks to having worse records on average.
Of course there’s a chance that cops are pulling young men over more, breath testing them more or letting young women off the hook for more things. But there’s also the strong chance that young people with multiple accidents and violations on their driving record are genuinely crappier drivers and more such people are male.
But “JournalistLogic” ignores that last possibility and declares the statisticians and the company evil and discriminatory. (There’s a toy numerical sketch of this pricing example after this comment.)
your argument also only really holds any water re: the justice system and doesn’t really seem to work vs the finance sections of the post.
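Here’s the toy numerical sketch of the insurance example above, with hypothetical rates rather than real actuarial data: the pricing function never sees gender, only the driving record, yet average quotes still differ by gender because the records themselves differ.

```python
# Hypothetical rates, purely for illustration.
import random

random.seed(1)

ACCIDENT_PROB = {"male": 0.30, "female": 0.15}   # assumed per-year accident probability

def quote(num_accidents: int) -> float:
    """Price on the driving record only -- gender is never an input."""
    return 500 + 400 * num_accidents

quotes = {"male": [], "female": []}
for _ in range(50_000):
    gender = random.choice(["male", "female"])
    # three years of driving history
    accidents = sum(random.random() < ACCIDENT_PROB[gender] for _ in range(3))
    quotes[gender].append(quote(accidents))

for g, qs in quotes.items():
    print(f"{g}: average quote = {sum(qs) / len(qs):.2f}")

# Average quotes come out higher for men even though the pricing is "gender-blind".
# Whether that counts as bias or as an accurate reflection of risk is exactly
# the disagreement in this thread.
```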
I agree with you. My response was to the blog post, not the ProPublica article, which I think identifies the problem correctly but is wrong on the specifics.
My point is not that algorithms can never be used, rather that there is a problem in the way they are currently used. People assume that the results are accurate because data and algorithms were used. But very often whatever statistical dependency the AI found was just bias in the data, or maybe over-fitting some statistical noise (no way to know, since the algorithms are private).
Algorithms that meaningfully impact a person’s life should have mechanistic explanations, the higher the stakes the better that explanation should be.
For your example I can think of a simple explanation (maybe wrong): hormones and testosterone make young men engage in riskier behavior.
Regarding finance: what proxies for blackness are they using? Denying loans to unemployed people discriminates against blacks because they are more likely to be unemployed, but is still a sensible decision. Denying loans based on an address or place of birth is nonsensical, and very likely a reflection of bias in the data.
It’s typically easy to *guess* at a mechanistic explanation.
Young men take more risks but also have faster hand eye coordination… yada yada yada, the million effects of different hormones can be used almost as easily for just-so stories as evo-psych.
Perhaps poor people from already-marginalised groups have less practical incentive to avoid breaking the law; after all, if you’ve got no job and few prospects and you’re already stigmatized, there’s less fear of the stigma of a criminal record. When you’re already at the bottom of a hole, there’s less fear of falling into it.
Perhaps people who grew up in a culture/community where half the people they know are being chased by debt collectors, where their parents know all the tricks to avoid debt collectors, and where their friends are also in debt, consider it socially normal and are less likely to be bothered by failing to pay back a debt. Meanwhile, might someone from a culture/community where it’s considered deeply shameful to be in debt or to fail to pay one back, and where defaulting is rare, be vastly more likely to do whatever they can to pay it back?
If you found your data showed that people born into the first community defaulted at a higher rate than people with equivalent other metrics (income etc) from the second community would you be surprised, would that be a sign of bias or observation of real behavior?
If you’re willing to accept mechanistic explanations as general as hormones the above aren’t unreasonable.
That’s another problem with using algorithms. The point I was trying to make is not that the justice system is unfair — I believe it is, on all three measures of fairness — but I do not know enough to be certain. My point is that finding statistical dependencies in current data defines fair as whatever the current system does. If someone claims the algorithm is fair, I want to see a measure of fairness.
There are ways to use algorithms for things like predicting recidivism or loan repayment. But the companies claiming that their algorithm is an accurate description of the way the world works need to prove that. Right now, they only prove that their algorithm accurately represents the data, while the data is assumed to be an accurate representation of reality. There is always going to be bias in this type of data because it was generated by biased humans. Perhaps the bias cancels itself out, but you need to prove that.
Whether I accept them or not is irrelevant; the important thing about mechanistic explanations is that they are relatively easy to prove and you can quantify the effects. They are first principles, to use a physics analogy.
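As for what “a measure of fairness” could look like in practice, here’s a hedged sketch. The thread doesn’t pin down which three measures are meant; these are three commonly discussed ones from the recidivism-score debate (flag rate / demographic parity, false-positive-rate balance, and calibration via positive predictive value), computed on fabricated toy records:

```python
# Fabricated toy data; the metric definitions are the standard ones.
from collections import defaultdict

# (group, model_says_high_risk, actually_reoffended)
records = [
    ("A", True, True), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, True), ("B", False, True), ("B", False, False),
]

by_group = defaultdict(list)
for group, flagged, reoffended in records:
    by_group[group].append((flagged, reoffended))

for group, rows in by_group.items():
    flagged = [r for r in rows if r[0]]
    negatives = [r for r in rows if not r[1]]                  # didn't reoffend
    flag_rate = len(flagged) / len(rows)                       # demographic parity
    fpr = sum(1 for f, y in negatives if f) / len(negatives)   # false-positive-rate balance
    ppv = sum(1 for f, y in flagged if y) / len(flagged)       # calibration (PPV)
    print(f"group {group}: flag rate {flag_rate:.2f}, FPR {fpr:.2f}, PPV {ppv:.2f}")

# A known impossibility result says that when base rates differ between groups,
# you generally cannot equalize all of these at once -- which is why "is the
# algorithm fair?" needs a stated metric before it can be answered.
```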
@juribe
You are making the fairly typical mistake of confusing the way the algorithm is used with the quality of the algorithm itself. This makes people want to fix problems in the place where they don’t originate, which causes many issues:
(1) Imagine a gun with a camera on it, that refuses to fire if the target has a pale skin color. Such a gun is racist.
(2) Imagine a basic gun that just fires when you pull the trigger, so any target is treated equally. Such a gun is not racist.
(3) Imagine that a white supremacist uses this gun to only kill black people, now (shooter + gun) is racist, but only because the shooter is.
We could now try to fix this by putting a camera on that gun, that refuses to fire if the target has a black skin color. Such a gun is racist as well, although in the opposite direction of (1). The idea is then that gun that is racist in favor of black people + white supremacist = non-racist outcome.
However, if we only allow the selling of such guns, we have to really be sure that only racists use them, because otherwise we get non-racist + gun that is racist in favor of black people = racism in favor of black people and/or black supremacist + gun that is racist in favor of black people = racism in favor of black people.
What I see a lot is that bias is assumed to exist and that people want to act on their assumptions, but when actually quantifying the data, we see a lot less of it than is assumed.
So you can’t just assume that bias exists, you also have to be aware that there is probably a lot of bias about the bias (meta bias). So I absolutely don’t trust systems where we try to counter perceived bias unless it’s proven that the bias is limited and/or not in the assumed direction.
If you want to act against bias, you should also act against meta bias.
I cannot tell whether you mean that a guilty black is more likely to become subject to prosecution than a guilty white (or perhaps innocent black and innocent white) because the justice system is racist or that blacks are more likely to commit crimes because the society is racist. The second doesn’t fit your “data from the justice system will tell you that blacks are more likely than whites to commit crime even if the opposite were true.”
For the first, Scott had a fairly detailed discussion of the evidence which concluded that it was not clear it was true–hopefully someone else remembers the title, because a quick look at the Archive didn’t find it for me.
I mean the first thing.
This is the post you are looking for, but I would say it does show significant racial bias in policing – read the discussion at the end again. I would add that most of the studies that found no bias were done in the 70’s and 80’s and are highly suspect. Ubiquitous cameras have made it very clear that police lie often, and that every shooting gets ruled a justified kill.
There is no reason why the amount of melanin in someone’s skin would lead them to commit more crimes.
However, this is not the causal path anyone not crazy is proposing.
It’s more like (1) people from different cultures tend to behave somewhat differently, (2) people sort into cultures partly based upon genetic factors including skin color – this sorting may or may not involve discrimination, and (3) crime is a behavior. Even this is badly oversimplified but it gets the gist of things right.
I have never heard a good theory that can explain the massive disparities in homicide rates between the overall “white” U.S. population and overall “black” U.S. population as being due to black murderers being more likely to be caught or something blatantly discriminatory like that.
A systemic explanation could work, but doesn’t really help make a moral decision about justice. For example, maybe poverty leads to more homicides and the oppression of black people by white people has elevated the number of black people in poverty so much the homicide rate went up among black people. Even if this were true, there are lots of good reasons why you wouldn’t want to adjust a criminal justice system to even out this disparity. Instead you’d go about tackling the poverty problem. Indeed, since the victims of the small minority of black criminals are mostly drawn from the vast majority of law-abiding black people, it would seem perverse to reward the first group because it happens to be part of a super group along with the second group.
Incidentally, I’m not saying the whole last paragraph is true. I don’t really think poverty causes homicides although I think the rest of my example is roughly correct. I’m just using it to illustrate the issues at hand.
For example, maybe poverty leads to more homicides and the oppression of black people by white people has elevated the number of black people in poverty so much the homicide rate went up among black people.
I’m going to say “drugs”. The people in the news in my country shooting each other every day are criminal gangs fighting over drug dealing turf wars.
You could say “that’s down to poverty, and singling out people from a certain neighbourhood of the city (and similar neighbourhoods in Limerick) and/or of a particular protected minority group is all bias and unfairness and prejudice” but if you’re looking for general criminality and violent crime and killing each other, then those are the areas, those are the people, and it’s down to gangs and drugs.
Certainly poverty has an effect on criminality, but mostly the answer here is not “poverty”, it’s “criminals”.
The larger issue is that that’s exactly the path a lot of people argue. Talking about black or white culture is meaningless. There is no culture lumping together Haitians, Nigerians, and black Americans, but these are all blacks. If anything, black American culture is closest to white American culture.
I’ll take your example at face value.
The opposite is adjusting it to keep a justice system that unfairly punishes black people because the current situation is already biased against them. I mean, the path you are proposing goes like this: whites oppress blacks, making blacks poorer; and blacks commit more crime because they are poorer. Even assuming the justice system is 100% racism-free and accurate, you end up with a situation where you are punishing blacks for being oppressed. There are better places to correct that than the justice system, but we should be working very hard to fix the underlying injustice; we are not, so some leniency in the justice system is fair.
@juribe
Problem is: you’ve got multiple competing standards for fairness/justice.
If your kid gets raped by someone it’s probably not terribly fair/just for the judge to say “Yes this guy did rape her but he’s from a minority group and a poor background which may have, statistically, contributed to his likelihood of becoming the kind of person who does what he did. yes this is his 3rd time doing this but we need to balance up the stats vs richer people so we’re going to let him off the hook this time with a warning”
unfortunately failing to do the above on a systematic scale can then be described as “well he’s a criminal because he’s poor and oppressed and now you’re going to oppress him even more by putting him in jail so you’re basically going to oppress him as punishment for being oppressed”
wanting to treat all groups such that they all end up with identical combined stats comes into direct conflict with wanting to treat each individual as a person who isn’t entirely defined by their group.
Whichever version you choose in an attempt to be “fair” or “just” you end up being horribly unfair according to the other metric.
@juribe
There are a lot of people in the world. If 1/1000 people are crazy, that’s more than enough for there to be a lot of crazy people. Best not to deal with them when unnecessary.
And sure, black culture and white culture are not each one culture just like people of a given race aren’t genetic clones of each other. It’s just convenient shorthand for talking about populations. And obviously, it’s pretty specific black/white/whatever subcultures that completely dominate things like “number of homicides committed”. But speaking of lumping people into groups…
You seem perfectly happy to lump “innocent black people” with “probably guilty black people” with “almost definitely guilty black people” for the sake of evening out some disparities in the overall group. If black culture is not a valid construct to speak of, then your plan doesn’t make any sense either.
Many people are going to view adjusting a target several steps down the causal chain often in favor of people who are actually guilty as being extremely corrosive to the idea of justice.
I am totally on board with trying to get the police to be more polite in their interactions with African Americans. I am on board with trying to reduce the number of police shootings (although I’m not very hopeful that anyone is going about this in a sane manner). I am on board with legalizing marijuana and maybe even harder drugs. All of these things would likely reduce the statistical disparities we see between the black and white population. But I am against adjusting how we sentence convicted criminals so that it explicitly takes race into account in order to directly adjust a disparity if that disparity didn’t result from bias in that sentencing process. Directly creating legal privileges in order to paper over underlying problems is a bad idea.
@Deiseach
Agreed, drugs is probably one of the most influential causes/effects.
From the end of Scott’s piece:
Wouldn’t the factors for which there seems to be little or no racial bias be the ones you need for your argument? People who receive capital punishment are not in a position to reoffend, and people with long sentences mostly are not.
Your melanin point is just silly. Amount of melanin in the skin correlates with lots of other things, some of which might affect propensity to commit crimes.
Blacks have been over-represented relative to their proportion of the general population by 3:1 in terms of executions. This is almost always taken by some academics and the media as evidence that the death penalty is racially biased, and the disproportionality argument is repeated without giving any consideration to the logic behind it. The argument fits the ideological views of death penalty opponents, including mine – I’m against it as well. But the disproportionality in the percentages of each race executed must be compared not with each race’s proportion of the general population, but with the percentage of each race eligible to be included in those sub-populations (the correct comparison is each race’s proportion of murderers with its proportion of those executed). The black homicide rate averages 12.7 times higher than the white rate. Blacks are over-represented in every homicide category, ranging from 66.7% of drug-related homicides to 27.2% of workplace homicides [1]. Radford University’s Serial Killer Information Center finds that blacks have been 57.9% of serial killers in the U.S. from 2000 to 2014; whites 34%, Hispanics 7.9%, and Asians 0.06% [2]. And: “Although they are over-represented among death row populations and executions relative to their share of the U.S. population, blacks are underrepresented based on their arrests and convictions for murder [3]”.
In California, Nevada, and Utah, blacks were overrepresented on death row relative to their proportion of murders in those states, but in all other 28 states they were underrepresented. In Nevada the percentage of black murder offenders during that 22-year period was 30.2%, and the percentage of blacks on death row was 33.1%. In Utah the respective percentages were 8.6% and 10.5%, and in California they were 33.8% and 35.3%. So blacks are overrepresented relative to the proportion of murders they commit in those three states, but the differences are minuscule compared to the remaining 28 states, where they were underrepresented to a large degree [4]. It seems from these data that white murderers are proportionately more likely both to receive a death sentence and to be executed for death-eligible homicide.
A 2001 U.S. Justice Department study of federal death-eligible cases reached a similar conclusion: “United States Attorneys recommended the death penalty in smaller proportions of cases involving Black or Hispanic defendants than in those involving White defendants; the Attorney General’s capital case review committee likewise recommended the death penalty in smaller proportions of cases involving Black or Hispanic defendants…. In the cases considered by the Attorney General, the Attorney General decided to seek the death penalty for 38% of the White defendants, 25% of the Black defendants, and 20% of the Hispanic defendants.” Why we should see a disproportional number of white murderers receiving the death penalty relative to blacks is a question yet to be addressed in any strong systematic fashion. The 2006 RAND study didn’t find racial bias (https://www.rand.org/news/press/2006/07/17.html), and the 2000 U.S. Department of Justice study didn’t find it either (https://www.justice.gov/archive/dag/pubdoc/deathpenaltystudy.htm).
With all the evidence now showing that black murder defendants are less likely to be sentenced to death than white murder defendants, the race issue has become victim-centered rather than defendant-centered. The literature consistently shows that killers of whites (regardless of the killer’s race) are more likely to receive the death penalty than killers of other racial groups; I think that is your goal with these stats. However, the issue of victim-centered racial disparities in the death penalty is still riddled with moral panic and divided along ideological lines. Homicide is overwhelmingly intraracial, and that is the primary reason whites are more likely to receive the death penalty. Prosecutorial reluctance to seek the death penalty for blacks might reveal a devaluation of black victims vis-à-vis their white counterparts, and a perception that black-on-black crime is not a threat to the white power structure.
The famous Baldus et al. study did find that the odds of receiving a death sentence for a defendant who killed a white victim were 4.125 times greater than the odds for a defendant who killed a black victim. But the real difference is between a considerably less egregious 99% and 96%, if one doesn’t confuse probabilities with odds, or the odds ratio with the ratio of the underlying probabilities (there’s a quick arithmetic sketch of this after this comment). In the Gross and Mauro (1989) data, 28.8% of blacks who killed whites under felony circumstances received a death sentence versus 6% of blacks who killed other blacks under similar circumstances. Whites who killed blacks under felony circumstances received a death sentence 18.2% of the time [6]. Thus whites who kill blacks are more likely to receive the death penalty than blacks who kill blacks, although this must be viewed in light of the fact that whites only commit about 5% of interracial murders [3].
Concern about race-of-victim bias motivated the National Institute of Justice to commission three independent teams to examine the role of race in the application of the death penalty in federal cases [7]. They found that death-penalty-eligible violent crimes committed by blacks are especially heinous and violent: “When we look at the raw data and make no adjustment for case characteristics, we find the large race effects noted previously—namely, a decision to seek the death penalty is more likely to occur when the defendants are White and when the victims are White. However, these disparities disappear when the data coded from the AG’s [Attorney General’s] case files are used to adjust for the heinousness of the crime. For instance, [one of the studies] concluded, ‘On balance, there seems to be no evidence in these data of systematic racial effects that apply on the average to the full set of cases we studied.’ The other two teams reached the same conclusion. [One team] found that, with their models, after controlling for the tally of aggravating and mitigating factors, and district, there was no evidence of a race effect. This was true whether we examined race of victim alone . . . or race of defendant and the interaction between victim and defendant race.” [The third study’s author] reported that his “analysis found no evidence of racial bias in either USAO [U.S. Attorney’s Office] recommendations or the AG decisions to seek the death penalty.”
A recent study by Sharma et al. (2013) looked at all first-degree murder convictions in Tennessee from 1976 to 2007 [8]. They noted that prosecutors sought the death penalty for 76% of white defendants and 62.6% of black defendants, and that 37.3% of white defendants for whom the death penalty was sought received it versus 23.6% of black defendants. Prosecutors sought the death penalty in 64% of the cases where the victim was white, and in 33% of the cases where the victim was black. When controlling for a variety of aggravating and mitigating factors, as well as demographic and evidentiary variables, they found that the killing of a law enforcement officer or child, a history of prior violent offenses, and all evidentiary variables (scientific evidence, co-perpetrator testimony, confession, and strong eyewitness testimony) were by far the strongest predictors of receiving a death sentence for defendants of any race. Their most important finding was that the victim’s race did not play any significant independent part, but the victim’s sex (female) did. The racial make-up of the crime (black offender/white victim; white offender/white victim, etc.) had no significant independent effect, but the race of the defendant did, with whites being significantly more likely to receive the death penalty than blacks over the 30-year period.
On female victims of black attackers specifically, there are interesting data on homicide and rape and racial differentials. During a 45-year period, blacks were arrested for rape an average of 6.52 times more often than whites. More interesting than the black/white ratio of rape arrests is the fact that, although the great majority of homicides are intraracial, there is evidence that rape is an interracial crime because of the frequency with which black offenders choose white victims (up to 55 percent according to some accounts) [9][11]. The most common explanation for this is that the racial composition of a city, representing the pool of rape victims or offenders of a particular race, and the degree of black-white residential segregation are significant predictors of the racial patterning of rape [10]. Felson and South’s victimization survey in 26 U.S. cities found that black offenders chose white victims 41.3 percent of the time, while white offender/black victim rapes were approximately 1%. They also found that in cities whose populations are about 50% black, a white woman is 2.4 times more likely to be raped by a black man than by a white man. Felson and South run some tests and conclude that a “macrostructural opportunity model” – the available pool of offenders, the level of segregation, and the opportunities for contact – explains these differentials and the interracial pattern of this crime against whites. I find this explanation plausible, but elusive: their own datum shows that in a 50/50 split in racial composition (or 50/50 in the “available pool” of rape victims), white women were still 2.4 times more likely to be raped by a black man than by a white man.
Baldus looked at a number of aggravating and mitigating circumstances and basically argued that with zero or one aggravating factor in a murder case there was no racial discrimination regardless of the racial makeup of the victim/offender dyad. Similarly, with multiple aggravating factors (such as multiple victims, a prior homicide conviction, child victims, torture, and so forth), there was no discrimination and the risk of a death sentence was high regardless of the racial makeup of the victim/offender dyad [5]. However, it was at the middle range of aggravating circumstances where the “correct” sentence is less clear that racial disparities appear, and here is where the notorious “4.3” odds ratio came from.
In an analysis of the same Georgia data used by Baldus, Katz (2005) examined all aggravating and mitigating circumstances [12]. In the 1,082 homicide defendant sample, 141 cases involved a white victim and a black perpetrator. In 67.1% of white victim-black perpetrator (W/B) cases the victim was killed in the course of a robbery compared to 7.4% in black victim cases, and in 70.6% of the W/B cases the victim was a stranger compared with 9.6% of black victim cases. Katz (2005) also indicated that “White victim homicides show a greater percentage of mutilations, execution style murders, tortures, and beaten victims, features which generally aggravate homicide and increase the likelihood of a death sentence” (p. 405). Katz (2005) cites a number of other studies finding similar results in 10 different states; that is, once the full array of aggravating and mitigating factors are considered there is little or no discrimination evident in white victim-black perpetrator cases that is not accounted for by aggravating circumstances and other legally relevant variables. Cassell (2008) explains the nature of the victim-offender relationship: “Black-defendant-kills-white-victim cases more often involve the murder of a law enforcement officer, kidnapping and rape, mutilation, execution-style killing, and torture—all quintessential aggravating factors—than do other combinations [13]”.
Jennings et al. (2014) estimated the white victim effect using traditional statistical (logit regression) models controlling for 50 legal and non-legal factors and found that “the ‘White victim effect’ on capital punishment decision-making is better considered a ‘case effect’ rather than a ‘race effect’” [16]. In other words, each case is unique in that it contains a multitude of case characteristics (aggravators and mitigators) and evidentiary qualities that have to be considered. Given the ability to case-match (albeit, imperfectly), this quasi-experimental approach is currently the best that we have to tease out any discriminatory effects that may be present in sentencing.
continues here: https://medium.com/@insilications/racial-bias-is-small-or-non-existent-in-death-penalty-sentencing-disparities-2a2671dea480
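Quick arithmetic check on the odds-versus-probability point in the comment above, using the 99% and 96% figures it cites (whether those are the right probabilities to read out of Baldus is a separate question; this only verifies the arithmetic):

```python
# Arithmetic only -- the 99%/96% figures are the ones cited in the comment above.
def odds(p: float) -> float:
    return p / (1 - p)

ratio = odds(0.99) / odds(0.96)
print(ratio)   # 99 / 24 = 4.125

# The same 4.125 odds ratio is also consistent with probabilities that are far
# apart (about 50.8% versus 20%, for example), so the odds ratio by itself
# doesn't tell you how close or far apart the underlying probabilities are.
```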
@DavidFriedman:
The piece in question was Race and Justice: Much More Than You Wanted To Know.
I don’t remember quite what he found, though, other than that it was less clear cut than the people around me thought.
If my memory serves me, he also doesn’t look at prosecutorial discretion at all, which surprised me at the time. I could see that weighing either way, though- there are credible narratives both for biased prosecutors targeting black people, and prosecutors overcorrecting to avoid being in this category.
“A Medium article by some Google AI scientists and a professor, in my opinion, definitively explains away that “AI can analyze your face to tell if you’re gay or not” paper from a few months ago. They hypothesize that the AI was just looking at a couple of non-physiological features – glasses, makeup, facial hair style, tanning. Then they put their money where their mouth is and prove that an algorithm based on these features can mostly match the original AI’s performance. I continue to find it disappointing that flawed papers make it into big-name journals while correct rebuttals of them languish on Medium.com.”
AI researcher here. You’re 100% wrong. The “gaydar” paper actually had results using facial landmarks, which wouldn’t be affected by the changes in the Medium paper related to glasses and such. At the very best the Medium paper brings up potential issues but doesn’t demonstrate that they’re a problem for this paper.
https://www.reddit.com/r/MachineLearning/comments/7q2cei/r_high_quality_open_peer_review_of_the_sexuality/dslzwtw/
In the article about Google Maps, it was cool how the algorithmic areas of interest (AOI) lined up with people’s mental maps of San Francisco.
I do have the high-school dream that everybody else has, but also have a very similar one. I’m a musician and have been performing live for a dozen years or so. I don’t really have stage fright any more, but in this dream, someone hands me an instrument right as I’m about to go on stage that I have no idea how to play. (Usually the trumpet where in real life, I play the piano and a few woodwinds). In every case, I just go up there and try to fake it, and in every case, it works perfectly, and I play the instrument well and nobody knows the difference.
The fun experiment would be for you to learn to play the trumpet, and then see if you still get “can’t play the trumpet but manage to fake it” dreams, or if not, which instrument replaces it as your psyche’s go-to thing-I-can’t-play.
From the article on AI bias:
“The authors refused my request to provide the original data, so my numbers were obtained from the graph using a ruler and eyeballing graph ticks.”
Shouldn’t that set off alarm bells?
I got this one pulled on me when I just started playing in 2003. The guy gave me a shovel and made a crack about “…you’re gonna need it, heh.” I thought it was just him joking about how he was going to kill me, but this makes a lot more sense now.
(I managed to run away from him anyway, so guess he just wound up losing a shovel. Take that!)
Wouldn’t take the uniform prior on this one, especially since this literally kept happening until Google (inc. YouTube) existed, then stopped.
Sample size of two, and one of those was a comprehensive revision of copyright law, not a simple term extension. Prior to 1978, copyright lasted twenty-eight years and could be extended for another twenty-eight. Stuff fell out of copyright all the time under this regime. The Copyright Act of 1976 (which took effect in 1978) overhauled almost everything about American copyrights, and in the process changed that scheme to life of the author plus fifty years. Then the Copyright Term Extension Act of 1998 changed that to life of the author plus seventy years.
I don’t think that’s enough to draw any strong conclusions on.
It’s interesting that they hypothesize this, given that the paper they’re refuting states:
I wasn’t aware that creating a performant classifier using a set of features A “refutes” a performant classifier that uses a set of features B, but what do I know?
That Civilization 5 mod is already outdated, because the foolish meatbags released Civ 6. Now everyone will ignore this educational tool in favor of the new hotness.
On the other hand, humans are already accepting their fate as paperclips, so someone made a mod for Stellaris where you can adopt a “paperclip maximizer” policy.
http://steamcommunity.com/sharedfiles/filedetails/?id=1270179464
Everything you write is explicitly considered in the original paper. In no way are those flaws or is the new paper a rebuttal. Its main accomplishment is to trick you into believing these falsehoods.
The original paper identified all these mechanisms and tried to isolate them. Identifying features does not “explain away” identification. It explains away physiological claims, but – do I really need to say this – the original paper does not claim that glasses are physiological. It isolates the shape of the face and only suggests that this weaker predictor is physiological.
The new paper does make one real rebuttal, but you didn’t read far enough to reach it. It claims that the face-shape signal isn’t real, that it’s mostly just camera angle, which the original paper attempted to correct for but which has some residual. The new paper does provide one interesting piece of evidence about this:
Thanks, I’ve somewhat changed the text of the link and will change it further after I look into this more.
Guillermo del Toro saying the UFO design was ugly tells me he saw real aliens, as either aliens won’t share our aesthetics, or they just made something v practical that worked, regardless of whether it looked nice.
I mean, if you had GdT saying the UFO looked cool or something, you’d know he was imagining it.
What is the plausible mechanism for UFOs being real extraterrestrial vehicles?
It needs to explain why they keep coming when STL interstellar flight is so expensive and FTL thought to be impossible (maybe the UFOs are coming from a comet converted into an O’Neill habitat?) And how to rule out the simpler explanation “all sightings are inspired by science fiction.”
After all, didn’t the first flying saucer sightings happen in 1947? This postdates H. P. Lovecraft’s “The Whisperer in Darkness,” about how alien scientists have been sending missions from their outpost on Pluto to study humans for as long as we’ve existed…
Maybe just regular-ass, really slow subluminal flight?
UFO/alien sightings have most definitely been contaminated by popular culture.
I dunno why people question why aliens would come to visit us; humans regularly used to make years-long exploration trips around the globe. If you’re an immortal race of aliens/AIs, going away for a couple of hundred or thousand years to look at other aliens would seem an attractive offer to some of them.
Not that I believe that UFOs are aliens. I just find it very interesting/funny that GdT would call one of them “ugly”.
I don’t question that aliens would spend a couple of years one way on an expedition to study us. We’re getting into sketchy territory when we assume that they’re immortal and this makes them willing to spend a couple of hundred years or more cooped up in a flying saucer.
Have the saucers be coming from a permanent base made from a comet and the energy/economics suddenly become much less sketchy. The big question then becomes why we can sometimes detect them in the atmosphere but never on their way to Earth.
@Nornagest: definitely. And then I become skeptical that it could possibly be true because it pattern matches the fictional Fungi from Yuggoth.
If you can do cheap fusion power and you don’t mind long travel times, the Kuiper belt is surprisingly friendly to colonization. Millions of conveniently sized rocks full of water and other volatiles and no gravity wells to speak of.
Not that I think this is likely. But if I was a science fiction writer and I wanted to justify my Fungi from Yuggoth with current science, that’s where I’d start.
Not exactly. Here’s the full quote:
In other words, it didn’t look ugly in a not-designed-by-humans way, it looked like a classic flying saucer (as imagined by humans). It’s just that del Toro thinks the standard human design for a flying saucer is ugly.
On the note of the flying saucers: the ophanim order of angels is described as looking like flying chariot wheels covered in eyes. You know what else looks like a flying chariot wheel covered in eyes? The stereotypical flying saucer.
It seems reasonable to me that the popular conception of UFOs is based on reported sightings, so whatever a UFO is, it’s going to look cliched.
How many independent pieces of evidence updating you to 50:50 on near-Cartesian doubt does one need before it becomes time to find/build a new foundation for your model, even if you would have initially put very low odds on the new foundation?
I’m honestly feeling a lot of confusion here and a lot of sympathy. It occurs to me that it might be more efficient for you to continue to explore the old model while periodically reminding us that it finds the hypothesis that people on the internet sometimes lie and some other claims surprising, just as wave and particle models both found some of the other’s predictions surprising back in the day.
C’mon, nothing on the predictions?
As someone who is also skeptical about current claims of AI, I found it an interesting mixed bag.
He expects dedicated lanes allowing only driverless cars fairly soon (NET 2021). I doubt it. You can’t keep human-driven cars out of the lanes reliably, so you can’t have dedicated lanes if mixing is a safety concern. And if it isn’t a safety concern… why do it, again?
He expects the first driverless taxi programs to have dedicated pick-up spots, and expects them NET 2022. Google seems really interested in launching a non-dedicated-pick-up taxi service in Phoenix this year. I mean, maybe it will be a spectacular failure, but even my grumpy skepticism thinks that Brooks is being too skeptical here.
He also expects a true driverless taxi service in a major US city NET 2032. That’s extremely skeptical! My sense is that optimists say that happens in 2019 and pessimists say 2025.
He thinks that popular press will believe that the era of Deep Learning is over by 2020.
He seems to think that robots will get good enough to be consumer products in the 2030s.
He expects fairly rapid (NET 2021) dedicated lanes allowing only driverless cars. I doubt it. You can’t keep human-driven cars out of the lanes reliably, so you can’t have dedicated lanes if they’re a safety concern.
Well, there are safety concerns and then there are safety concerns.
If you had a lane that human-driven cars couldn’t legally be in, that would remove a really big fear: that in an accident between a computer-driven car and a human-driven one, the computer-driven car would face, say, a financial penalty 10x that of the human driver in the legal system. (Which may or may not happen: we don’t know how juries will see it.) If the human-driven car should not have been there in the first place, that significantly decreases the legal risk.
Also, you’ve made the problem of driving significantly easier. So easy that cars being sold right now could go in that lane (adaptive cruise control with lane steering).
I’m not saying these lanes are a good idea, or that it’s the way we’ll get to auto-cars; just that they aren’t a bad idea in the way you say.
But you can use cars with lane-keeping and cruise control in normal lanes. Why do we need a lane that is only used for driverless cars? Either they’re safe enough to use in normal lanes (in which case great, use them in normal lanes), or they aren’t safe enough, in which case, holy shit guys, you can’t run a car on a freeway when the only thing separating it from human drivers is a painted line and some signs saying, “No, seriously, these cars are unsafe.” That doesn’t work, and if you think that would shield you from liability, you need better lawyers.
Dedicated roads could work, with controlled access. Dedicated lanes can’t.
Start with this assumption, which may not be true but I ask you to accept it anyway because it’s where Brooks is coming from:
* Auto-cars are safe as long as they are dealing with other auto-cars and the road. They can be counted upon to stay only in automatic lanes.
Now, given dedicated lanes, if some human-driven car shoves their way into those lanes, there may (or may not) be a crash, but legally the auto-driven car is clear.
If there is a crash because someone else violates rules of the road, that doesn’t necessarily mean the auto-car was unsafe.
I’m not sure if you drive, but every day I drive on a road where there is literally nothing besides a painted line stopping me from going into oncoming traffic. Those silly rubes probably wouldn’t even expect it. But that doesn’t make them unsafe drivers just because I could break the rules of the road and crash into them.
That doesn’t work, and if you think that would shield you from liability, you need better lawyers
Wait, did you think the car companies would just go out and, like, declare a bunch of lanes are for use only by auto-cars? No, no, no, this would be the local governing authority declaring such a thing, because progress/safety/whatever. They don’t need lawyers; they are writing the law that fault is presumed to be with the car that violated the road markings.
Yeah, nobody is going to do that. Nobody is going to say, “It’s okay for automatic cars to be unsafe around human drivers, as long as they’re safe around other automatic cars, on a road with just a lane marker between them and ordinary drivers.” And a municipality or even a state would have a super hard time indemnifying the driving company from liability if it tried, because people would find ways to sue in federal court.
But it doesn’t matter, because state lawmakers aren’t going to indemnify automakers under those conditions. State lawmakers kind of hate it when people show up with pictures of people dead in horrific car crashes and say, “Joe Suchandsuch says that Google can’t be held responsible for the fact that their car plowed into this minivan with three toddlers in it.”
It’s completely different from current traffic rules because we say that if Mary swerves into the oncoming lane and plows into a minivan, it’s her fault, and also because Mary does not have 150,000 identical clones who are clearly susceptible to swerving into oncoming lanes.
And because driverless car makers don’t even want this rule. They know that if there was a driverless car lane, it would move faster than the rest of traffic, and so people would constantly jump into it and drive in it, and if that produced a constant stream of crashes, even if they were not held liable for it, they’d still eventually pick up the reputation that their cars were death-traps.
Finally, the whole thing is silly. Automatic lane-keeping and adaptive cruise control on freeways already exist, and people already can and do use them in mixed lanes.
When I said:
Start with this assumption, which may not be true but I ask you to accept it anyway because it’s where Brooks is coming from:
Did you accept it, or reject the hypothetical without saying so?
Seems like “lobbying is ineffective” (whether measuring the passing of “favourable legislation” or comparing bottom lines) is very different from “money doesn’t really matter in politics”. The possibility left out here is that lobbying has all sorts of effects (perhaps some beneficial) just not the ones businesses paying for lobbying want. It seems plausible for instance that lobbying takes attention from politicians who need to lend ears to get their campaigns financed, distracting them from whatever else they might be doing.
There is even an interesting legal principle that the POSSIBILITY of APPEARANCE of corruption has detrimental effects, even if it never actually happens:
https://en.wikipedia.org/wiki/Appearance_of_corruption
This is the beating heart of the best campaign finance reform arguments, IMHO, but there’s a lot of evidence to suggest the public is just always gonna see corruption anyway:
http://www.jstor.org/stable/4150623?seq=1#page_scan_tab_contents
Hi Guys,
I thought I’d throw in my 2 cents on the political spending thing.
During the 2012 election I worked as a writer for an organization that tracked money in politics. The linked reports largely correspond with my own look into SuperPAC and 501c4 spending:
https://www.prwatch.org/news/2012/11/11854/biggest-loser-2012-election-karl-rove
which is to say: this is money that is being ‘wasted’ in the sense that it doesn’t seem to be spent wisely and doesn’t actually seem to contribute to “wins.” As Scott mentions there is a lot of academic literature to suggest if there is a harm to this kind of money in politics, it isn’t something obvious like “it influences people to vote against their own self interest”…because it probably doesn’t have much of an effect.
If you want my own pet theory: lobbying (and election advertising) is a very small world; about a third of outside spending in the 2012 election moved through a handful of interconnected firms that are, sorta by definition, well placed to influence political campaigns:
https://www.prwatch.org/news/2012/12/11868/where-did-all-those-super-pac-dollars-go-13-all-outside-money-moved-through-handf
My theory is that most lobbying and advertising power is actually directed at influencing politicians to make them want lobbyists, or more precisely, to make them think they need to spend money on this stuff. This would also explain why most political advertising is… like… really awful from an advertising perspective. Most of the firms’ time and attention goes into securing more money; the actual content of the ads is an afterthought.
For all we know, the reason why C-rt-s Y-rv-n is banned from Google might be his work on Urbit, rather than his blogging.
That seemed like one of the least objectionable parts of the whole complaint. So what if Google doesn’t want CY on its facilities? I wouldn’t want him in my house either.
So what if Google doesn’t want CY on its facilities?
Fine. Then let them have a publicly available list of “You cannot invite these people for lunch or to visit the premises”, so everybody knows.
Not a private secret list only security get to see when a secret silent alarm(!) is triggered and they appear to escort you off the premises without any explanation.
They actually called it a “watchlist” – you know, like they’re the FBI or something keeping a list of terrorists. You may not like the guy, but until he’s genuinely tried to smuggle a bomb into Google, this kind of thing is very dodgy. Say straight out “These people are banned because we don’t like their politics” instead of having creepy ‘security swoops in to disappear you off the premises’ tactics.
Exactly. Explicitly say “the people on this list will, if appearing on the premises, be escorted away by Google security because we don’t like their politics.”
If you believe this, it’s only fair to let the general public (your user base) know whose side you’re on in the cold war.
I write this as someone who isn’t on his side (I have actually never read anything he has written, to the best of my knowledge).
To start with, it’s reasonable that you should control what happens in “your house” because you feel a strong sense of attachment to it, but that kind of personal attachment is usually supposed to be discouraged in corporations, perhaps precisely because it leads to bias. It’s weird that Google takes this so personally, and the fact that they do points to the possibility that they discriminate in other ways, even setting aside whether it’s weird and objectionable on its own (which I do think it is, so I’m not shifting the goalposts, just setting up another pair).
To further establish the issue with what happened, consider that in your example you’re probably just not inviting CY yourself. But in this case he was already invited and then thrown out. So if, say, a relative living at your house invited CY over, and you found him eating lunch, would you throw him out? Would you set up a security system so that you’d be alerted if he ever walked in? (I can’t really make a perfect analogy here, because no one living in your house is likely to have the relationship of “employee” to you, but hopefully you get the idea anyway.)
It’s weird, certainly. That security list is probably supposed to alert when someone who might actually be a threat (e.g. disgruntled former employees, Russian strippers) comes to campus. There seems to be no good reason for Yarvin to be on it.
Suppose I were really rich – I’m not, but just suppose. In order to keep my mansion in peak shape, I have a small staff. The staff spends a lot of time at my mansion, so I’m okay with them having people over for lunch. Why not? But there are some people where, if I knew they had been invited over, not only would I throw them out, I’d probably let the staff member go. If I’m being 100% honest, CY may not be quite there for me, but if it were Jim, there’d be zero hesitation.
I agree that a big multinational corporation is supposed to be a little less – call it sensitive – than a person, but secret silent alarm aside*, I just don’t see much objectionable about controlling access to campus in relatively arbitrary ways. There’s a certain amount of trust involved in having someone on campus – I’m not talking about stealing things necessarily, but CY could plausibly visit and then write a blog post about how the Googleplex is a den of degenerates.
I would feel differently if Google were looking up party registrations for each proposed visitor and excluding all those registered as Republicans. If that’s how you guys see it, then I guess I understand the outrage. But that’s not quite the same thing to my mind.
* Deiseach makes a decent point with respect to that. Since apparently you need to register a guest ahead of time, I don’t see why they didn’t just say no at that point.
I agree with this. The secret blacklist, silent alarm bit is the sketchy bit. If they were to say “as sovereigns of Google, we ban Curtis Yarvin from the premises!” I mean he can hardly complain can he? Having secret rules is bad, though.
Actually, now I come to think of it – how the heck do Google (or whoever is in charge of the Secret Ban List) know who Curtis Yarvin is?
I only know of him what you guys on here tell me, which is that under the name Mencius Moldbug he does political blogging of a very reactionary right-wing style, and as his own self Curtis Yarvin is doing something to do with programming or coding or whatever.
So while Googley Googler Ban Lister might know Yarvin, software developer, how would they know he’s also The Terrible Moldbug? Unless the person(s) running the list have, well, A Little List of people they deem Undesirables. I don’t know enough to know what happened – had Yarvin been invited to lunch there before by that Google employee, and someone saw them, recognised The Wicked Blogger, and complained to the Ban Master that Yarvin should be banned from the premises?
How does this list work? I think that’s the really interesting question here; it’s usual enough for security to be given a list of “kick these bums off the premises”, but as I said – how is the name “Curtis Yarvin” apparently a trigger for “Initiate Special Protocol for Dealing With Such Cases”? There seem to be several names of a ‘political’ nature on this list, if I believe all the articles online, and I’m really fascinated as to what the heck is going on there that guests of employees or visitors to the Google campus need a watchlist (a term I had hitherto associated with things like “terrorist watchlist from the FBI”).
And what is that “protocol” as described in the excerpt from the screenshot below:
It’s not a _secret_ that he’s Moldbug, and hasn’t been for a long time. I don’t know when he was added to the watchlist, but it wasn’t discovered until after the Strange Loop and LambdaConf controversies made him fairly well known among Googlers. How he got on the list is an interesting question, but I doubt it will end up being deemed relevant, so it won’t come out.
Yeah, the tweet isn’t totally clear on the timeline for security acting, but if it’s what it sounds like, where CY was on the campus eating lunch and then a couple of guards came up to tell him to leave, then Google security is extremely defective.
You don’t let somebody into the secure area if you aren’t sure. And that’s aside from the basic politeness of telling somebody they’re going to get turned away well before they show up at your door, having spent time getting to your site.
@CatCube
They don’t want to put fences with checkpoints around their campus, of course.
Reactive policing is far weaker than pro-active policing, but it is not nothing.
I believe they don’t need to be registered ahead of time. There are kiosks for guests to sign in and pick up their visitor badges (with the employee who’s hosting them), and employees can either preregister their guests to save some time, or type in the details on the spot.
As for why they didn’t say no, well… all the kiosk knows is a name and possibly an email address; it’s a global company, so name collisions are likely; there are more kiosks than security desks, and people can have visitors at any time of the day/week, so making them find a human and sign in manually might be impractical; and the system may as well err on the side of trusting employees to know who to let in, since a rogue employee could just enter a fake name.
Is CY in some sense a competitor of Google’s? There’s a vanishingly slim chance he’s a threat to them, but they could reasonably have a policy of “no independent software developers without prior approval.”
A reasonable policy of “no independent software developers” would be “because we don’t want them hiring away our employees under our very noses” (though it might be more likely an indie developer was trying to get their pal in Google to steer them towards a job with Google), but then they should also ban head-hunters and recruiters and the like – if they do, fair enough, but that doesn’t seem to be why Yarvin was banned.
A policy of “no independent software developers without prior approval” enforced by silent alarm would be ridiculously unworkable at Google. There are a lot of friends and family of Google employees in that category, so security would have their hands full.
The list CY was on included Alex Jones, so I don’t think it’s about competition as such.
Urbit, his main software project, is pretty ambitious and aspires to compete with a couple of Google properties — but I’d be astonished if that was the issue, mostly for the reasons that others have already given. Plus, the Google campus has well-defined secure and insecure zones (corresponding roughly to spaces where actual development takes place, and social or administrative spaces respectively) and you’re not supposed to take guests into the secure ones, so he wouldn’t have had the opportunity to engage in a little industrial espionage.
The most likely explanation for Curt Yarvin being on their “toss him off the campus” list is the fear of some kind of outrage-storm from social media claiming that this is proof that Google is signing onto whatever evil is ascribed to Yarvin.
It’s important to remember that whatever the SJW-ish leanings of Google management or rank-and-file techies, they’re also a huge target for outrage, lawsuits, boycotts, etc. They are probably extremely vulnerable to discrimination claims because they’re in an industry where the top performers are overwhelmingly Asian and white men, and they want to hire the top performers. Their entire business is ultimately about advertising, and online advertising is a business with a lot of potential to creep the public out via automated massive privacy violations. And so on. They have vast wealth and a huge amount of power, but it wouldn’t be so hard for them to find themselves so bogged down in investigations for discrimination and antitrust and privacy violations that they had all their dynamism and innovation sucked right out.
(FWIW, his blog is still up — and it’s hosted by Blogger.)
Technically, Google’s a private organization and can ban anyone it wants.
And I say this as someone who kind of admires Moldbug.
I agree that private businesses can ban anyone. I’m saying “put the ban list up so people know who’s banned”.
Imagine you walk into an office building for a meeting, the receptionist checks your name, next thing you know the security guys are taking you by the arms in a discreet but firm grip and you’re being diverted into a back corridor to be sent out the back way, and nobody tells you why this is happening.
If you knew beforehand “people wearing white socks are banned”, you’d either have dressed differently or arranged to meet the person working there outside the building, right?
@Null42
Doesn’t that depend on the reason? Can a private organization ban all black people? I don’t think so. Depending on how that California law is interpreted, banning people purely for specific political views may be similarly unlawful.
Also, legal and moral are separate things. They could also have midget-throwing contests (except in their Florida and New York offices). Others can still judge an organization that does that.
Hah, I could definitely see a court arguing that banning Democrats is not allowed, because black people are all Democrats, and therefore this is obviously just an attempt to be racist (we’ve already seen courts employ this logic with Trump’s immigration restrictions)
I think we’re all fairly clear that Google would not try anything approaching a ban on black people or Democrats. The interesting thing would be to see if they would be willing to ban a black Republican local politician, or a very conservative person – what is it that Scott’s link says about the new Libertarian Party leader? He’s a black guy? Anyone want to try inviting him to lunch at Google and see what happens*? 🙂
*I imagine it strongly depends on whether they think he’s a “gay rights and legal weed” Libertarian or a “no social safety net” Libertarian.
Yeah, it’s definitely a case-by-case basis thing.
Milo being gay hasn’t spared him the wrath of the SJW left at all.
To be fair, Milo almost certainly lasted a hell of a lot longer than a straight guy saying the same things (which were quite vile) would.
edit: I’m kind of understating things, as a big part of his shtick was explicitly based on exploiting the left’s blindspot for sexual minorities.
I think a private club could, at least in some states.
IANAL, but iirc a lot of the Civil Rights laws apply to businesses open to the public, which leaves a lot of room for other kinds of organizations.
I know at one point in time this was an actual issue, but I haven’t checked recently so it may have changed since.
Often, if a policy gathers enough social opprobrium, there’s a tendency to assume it must be illegal just because it seems terrible and no one’s doing it. This is especially true if it’s similar to other things that *are* illegal.
Re: Lobbying, this is the key quote to start with
This is what you would expect if a form of the EMH were mostly correct, and it’s also a major shot against those who hypothesize things like “lobbying cancels out lobbying, so everyone does it but there’s no measurable net benefit”.
Really basic economic analysis indicates that money will be drawn into any sector that has outsized gains (i.e. greater than the average), and that this money will reduce the rate of return until it approaches the market average. If lobbying benefited companies more than the average investment, you would expect lobbying to increase until it drove the return down. The equilibrium is a company that is indifferent between lobbying and investing the money in some other way. This doesn’t imply that lobbying is ineffective; it implies that more lobbying would be ineffective.
Thanks for that post. This has never occurred to me before, even though it’s simple.
I am holding my breath for an SSC dive into answering the lobbying question…
I thought that at first as well, but I’m not sure that’s what the article says. It’s hard to figure out exactly what they mean, but either your interpretation is wrong or the article doesn’t mean what everyone thinks.
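For what it’s worth, here’s a toy numerical sketch of the equilibrium argument a few comments up. The diminishing-returns curve and the 7% market rate below are my own made-up assumptions, not numbers from the study or the Bloomberg article; the point is only how “more lobbying looks ineffective at the margin” can coexist with lobbying being worthwhile overall.

```python
# Toy illustration (assumed numbers, not from the study) of the equilibrium
# argument: with diminishing returns, a firm keeps adding lobbying dollars
# only until the marginal return falls to the ordinary market return, at
# which point *additional* lobbying looks ineffective.
MARKET_RATE = 0.07  # assumed average return on ordinary investment

def marginal_return(spend):
    # Hypothetical diminishing-returns curve for lobbying spend (in dollars)
    return 0.5 / (1 + spend / 1_000_000)

spend, step = 0.0, 100_000
while marginal_return(spend) > MARKET_RATE:
    spend += step

print(f"Lobbying spend stops growing around ${spend:,.0f}, where its "
      f"marginal return ({marginal_return(spend):.3f}) no longer beats "
      f"the market rate ({MARKET_RATE}).")
```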
Seriously, read that Alice Maz Minecraft piece.
What I found interesting is that the patch notes thing actually works even with clearly pretty dedicated and knowledgeable players.
I distinctly remember, as a child, reading Runescape patch notes about an upcoming update to the Ring of Wealth (a semi-rare item) and thinking “Huh, maybe I should stock up on those”, but concluded that I couldn’t be the only person who’d think of that.
Sure enough, the update came out and they skyrocketed in price.
I suppose in retrospect that most Runescape players have no idea how that kind of thing works – or rather, don’t think to apply it to the game – and so it’s very easy to play the economy.
I do credit Runescape with getting me interested in economics. The scams were if anything the least interesting part of it (though my favourite one was the one where you’d sell an easy-but-annoying-to-acquire item, ideally one necessary for a quest, at an insanely inflated price on the opposite side of a bank from someone offering to buy it at an even more insanely inflated price, then wait for someone to try to buy it off you and sell it back. Then you both leave and split the profit.)
I played WoW up until Mists (and quit because fuck Pandas), and in Lich King and Cata I was one of, if not the, richest person on my server. I had almost a million gold spread across my characters while most people were walking around with ~5k or less. I did it (with the help of some mods) by buying low and selling high. On Saturdays, when nobody was raiding and people were farming, I’d buy cheap gems/herbs, turn them into glyphs and cut gems, and post them; then on Tuesday night, when everybody gets a new Enchanted Long Staff of Pwnage or whatever, they rush to the auction house to get gems for it. So I could buy, say, an Inferno Ruby for 50-70g on a weekend and sell it on a Tuesday as a Brilliant Inferno Ruby for north of 150-200g. And of course I did this with lots of gems, lots of different cuts, lots of glyphs. I would frequently curse that if only I were half as good at the stock market as I was at the WoW auction house, I’d be loaded.
> Implying the stock markets are only twice as efficient as the WoW auction house
The list of MMORPGs I’ve bought and tried is absurdly embarrassing. The ones that I gave serious time to – say, more than 20 hours – were Everquest, City of Heroes, and World of Warcraft. Maybe Lord of the Rings Online, which was probably the best of the bunch. Maybe something else I’m forgetting.
I stopped playing any of them years and years ago, though. I’m a, er, “video game enthusiast,” but the amount of time you have to pump into these games to get past the first levels is insane – not just playing the games, but forming the relationships, attaching yourself to a decent guild, and being an active and helpful member. I never once was able to do any of that.
After reading the Alice Maz article, though, I’m sorely tempted to try again.
Still, I remember how much fun Everquest was when it first launched, and nobody really knew what to expect. That game, at the time, was difficult in all the right ways.
You might be surprised by how low the bar is these days in at least WoW.
I’ve got raiders that show up for raid time and I basically never see them past that, and even they’re in a guild with a scheduled raid team. Based on how difficult recruitment is these days, even that isn’t super popular.
If you’re good at whatever you do and occasionally want to do stuff, people will be pretty happy to see you. Anything beyond that is gravy.
An embarrassingly huge portion of my internalized economics knowledge comes from Runescape (“Inflation? Oh that’s like when there’s no currency sinks and everyone has a bunch of GP.”)
For what it’s worth, the RS07 market seems to respond somewhat intelligently to patch notes, probably because of the older and more competitive playerbase.
When she mentioned how she outsmarted FF6, I *immediately* knew what she was talking about–I remember doing the exact same thing as a kid. Interesting how isolated experiences tend to converge!
Yeah, that’s pretty low-level optimization. LOTS of middle schoolers independently figured that one out.
Of course the major flaw is that you can only powerlevel Terra and Edgar this way, right? Not your whole party.
I *think* it’s Sabin too. It’s been a while, but I seem to remember his Blitzes trivializing a lot of stuff early on. I’ve got one of those SNES Classics – looking forward to replaying it soon!
And Sabin (like sov said), but every subsequent character that joins has a starting level based on your party average. So as long as you’re not trying to obsessively optimize with esper leveling bonuses this’ll pretty handily trivialize the rest of the game.
The major flaw is that powerleveling characters there means you can’t take advantage of the levelup stat buffs you get from Espers, which gimps them later on. And you need to grind battles anyway to learn spells.
I’d say the major flaw is that if you advance to the point where the rest of the game can be beaten by holding down the A button, you might as well just read a plot synopsis and call it done.
Of course, I’ve sketched Gau, so I’m speaking as a convert.
Agreed. I figured out that river exploit as well, but then I just smirked and kept playing normally. If I wanted to basically skip all the combat in the game, I’d just patch the ROM to skip it or whatever, but then what’s the point of playing at all?
It was great. I really wish I had the time right now to play on a server like that, one that has an actual economy.
I wonder whether she has ever learned to play Dwarf Fortress.
It’s the ultimate source of mechanics with weird edge-cases to exploit.
For example: Zombies are stronger than regular animals. Stronger = more muscle mass. Thus zombifying and re-killing carcasses increases meat output by 40%.
I never knew that about Dwarf Fortress zombies, but within the context of the game it’s perfectly logical.
I’ve always wanted to try EVE Online after reading about some of the heists, etc. on it; I don’t think I’d find an MMO compatible with a day job, though, so I’ve never opened an account.
Yeah. I’d get into Eve, but there are too many distractions in my life as it is.
Eve Online is the MMO I wish I were dedicated enough to play.
Like, every time someone talks about Eve, I realize that game is the perfect game for the idealized version of me I’d like to be.
If I win the lottery, or retire, or get fired, I will give EVE Online a shot – it looks like a blast that absolutely requires you to put in twenty hours a week.
Exactly.
The thing to remember about EVE Online is that, unlike Minecraft servers, it’s full of people who play the market. Also, the game would be about 1000 times better if it could get ~10x as many players and maybe expand nullsec. As it is right now the game is in a serious decline. I actually just quit last month for this reason. It was the first month I successfully paid for my 36 accounts/108 characters purely with PLEX. The previous two months I paid cash, or half-and-half or so. I was pretty sad, because that was a landmark achievement for me, and I was set to keep it up and become quite wealthy – something like 5 tril after a year or two. But there were mechanically unrelated reasons, as well as life reasons, so I had to defeat the sunk cost fallacy and quit. My only true regret is how it went down with my corp, but what can you do? Some of them are cool guys, but real life and rational decision making sometimes result in unfortunate outcomes.
Though the game itself usually gets the edge with the edge cases. Like when Toady made combat less about hitpoints and more about severing limbs/blood loss/brain damage… except sponges had none of these things and were immune to blunt damage, so giant sponges suddenly became the strongest 1v1 duelists in the game. “Without central nervous systems, the only thing they can feel is ANGER.”
Can we get a tl;dr?
It really is more fun if you read the whole thing, but my attempt:
It’s about understanding game economies and using that understanding to exploit the market and get (in-game) rich. In particular, the writer describes some of her most exciting feats on her Minecraft server.
I haven’t had a chance to read the article yet, but could this be a case of “everyone feels like they have to lobby because everyone else lobbies, but the net effect for the average company is driven to 0 because everyone does it, and anyone who didn’t lobby would lose out.”
I think it’s more like differential effectiveness for positive and negative lobbying – it’s very well supported in the social science literature that lobbying against [policy] is vastly more impactful than lobbying for [policy], as well as a lot more accessible to people who aren’t giant corporations; end result, it’s very difficult to get any specific changes put into law if even a comparatively small number of people are against it. And there are always at least a small number of people against anything.
I feel like I ought to point out that the authors of that sexual orientation study are perfectly aware that the AI was identifying (at least partially) cultural correlates, not something deeper – they say so fairly explicitly in their authors’ notes and in their response to some criticism from September.
Re the Bloomberg article, you should really read the book “Lobbying and Policy Change” from the University of Chicago Press.
It will give you a much better sense of how lobbying functions as well as a better sense of the difficulties in studying this area and why many studies have a hard time finding an effect. Essentially, once you read this book, you will have the tool kit to spot the methodological flaws in studies like the Bloomberg study and then you will no longer have fits of doubt regarding all human knowledge when you encounter them.
The Democratic Party will go away in 30 years. It won’t exist. It’ll be the Libertarian Party.
Hm – the new merged party may require a new name.
Something combining elements of both original party names to maintain continuity while pointing the way forward. Something like, maybe, the Liberal Democrats? 😉
Also: a commenter suggests an inside story in which the Damore memo was allowed to blow up because of office politics among top Google leadership.
If this whole mess blew up from the original storm in a teacup because two higher-ups were too busy playing “Daddy loves me more” “No, Daddy loves me!” instead of attending to business, then Google has more problems than it’s letting on. The Damore lawsuit is interesting because of his co-plaintiff, who from what little I’ve read of the brief seems to have an entire armoury of axes to grind with Google, and is lobbing into court such delights as screenshots of talks hosted by Google by employees identifying as “yellow-scaled wingless dragonkin” and “an expansive ornate building” under the heading of Living As Multiple Beings. I believe this is to bolster his complaint about “but when I said I think a family should be a mom and dad and their kids, I was the weirdo getting lectured by HR?”
I’d take it as a sign that Google is dominant enough and doing well enough that ambitious executives have to resort to matters like this for their corporate intrigues – there aren’t enough actual screwups and problems that you could leverage to throw your rivals under the bus.
Yeah, but that’s not just “two middle managers trying to jockey for a promotion”, that’s the feckin’ CEO of Google and the CEO of YouTube having a tussle. At that level, backstabbing and taking your eye off the ball has serious consequences, unless the Founders really have all the power and a title like CEO of Google just means “hired help”.
Well, YouTube isn’t a direct subsidiary of Alphabet, but a subsidiary of Google, so that would be the good old “backstabbing your boss” scenario.
Page and Brin are the top executives of the publicly traded parent company, where all the power and money resides, so I can see a setup where the CEOs of their subsidiaries and their subsidiaries’ subsidiaries are essentially glorified middle management.
I do know Page and Brin are the Shah(s) of Shahs, but really, if CEO of Google is the equivalent of “department head” – well, even if that’s true, two department heads playing politics in this manner have caused a stupid blow-up of something that should only ever have been an internal disciplinary matter. Were I Shah of Shahs, I’d be giving the pair of them the axe for playing silly buggers with my property in this fashion (though seemingly it’s complicated by one of the parties going way back with the original Founders, including being an in-law, so that’s not likely to happen).
“I want to make him look bad because I think I should have got the job instead” is, alas, indeed office politics, but when it helps make the company look bad, then you’re not doing your job as a professional.
My impression is that the title of CEO of Google means something like “hand of the king”: You run the day to day stuff while King Robert goes off having fun, except he can step back in to overrule you anytime he feels like it.
> an expansive ornate building
That sounds more like the results of ‘build a metaphor for yourself’ than an actual personal identity.
It does seem much more self-affirming than believing that you are a tiny plain shed.
Ordinarily it’s just the kind of eye-rolling “well of course people like this would be working for a liberal, progressively-inclined company in that area” and wouldn’t attract any more notice than that, but it does help support the case that if Google can tolerate the “anywhere else, they’d be sent to a psychiatrist” kind of staff, what was the big deal over someone holding traditional “husband and wife roles in a marriage” views?
Mind you, I do think the co-plaintiff is a bit of a pill who was objectionable in how he phrased his views, but disciplining him for holding minority views while letting the wackadoodles give talks advertised to staff as something they might like to attend does sound like unfair treatment. “All weird opinions are equal, but some are more equal than others”, is it?
The filing claims that those were not tolerated, but IIRC the example given was a passive-aggressive email from HR to someone who said, “If I had a child, I would teach him/her traditional gender roles and patriarchy from a very young age. That’s the hardest thing to fix later, and our degenerate society constantly pushes the wrong message.” That sounds more like edgelordery than traditionalism to me.
sounds more like edgelordery than traditionalism to me
Well, that’s why I said the guy sounded like a bit of a pill, but the HR response was also a bit Orwellian in its tone: “we’re sure you did not intend to say what some people interpreted you as saying”.
Very strongly hinting at “there’s a Right Opinion and a Wrong Opinion on these matters, and if you have any sense, you’ll stick to the Right Opinion as held round these parts”. They could have said “That’s a bad tone and bad communications, don’t use it in any more messages” rather than “you should think Right Think not Crime Think”.
If I say “A mother and a father are different persons with different roles in the family”, I damn well do mean to say what some people may interpret me as saying; let them argue with me over it, but unless any of us are calling for death to the enemy, HR should keep out of it.
The thing about parties called “Liberal Democrats” is that they vary so much from each other. There’s the Russian party of that name, Vladimir Zhirinovsky’s party, whose main purpose in existence seems to be to make Putin look moderate in comparison. There’s the one in Japan, where wikipedia doesn’t know whether to call it “centre-right” or “right wing”. There’s the one in the UK, which seems to be many things to many people, in particular somewhere for people who might normally be Labour types to go to if they don’t like the current Labour party or leadership (whether they think it’s too far to the left or to the right, doesn’t matter) – I mean one of the parties from the merger that created it was a Labour breakaway group.
“. . . such delights as screenshots of talks hosted by Google by employees identifying as “yellow-scaled wingless dragonkin” and “an expansive ornate building” under the heading of Living As Multiple Beings.”
I have to wonder if these guys are actually trolling their own management/corporate culture.
Probably not, unfortunately. I don’t remember the dragonkin one, but there was at least one incident where someone who claimed to be “multiple” wrote a Google Doc explaining why “they” were quitting, claiming all sorts of mistreatment because people didn’t treat “them” right. And a whole lot of others piling on about how horrible all these people mistreating “them” were. The elephant-kin in the room — that this person had some sort of serious mental illness and perhaps should not be taken at face value — was never mentioned. The idea that perhaps it’s not everyone else’s fault when they refuse to buy into a person’s mental illness… anathema, not welcoming and inclusive.
someone who claimed to be “multiple” wrote a Google Doc explaining why “they” were quitting, claiming all sorts of mistreatment because people didn’t treat “them” right
Well, that puts “I was cruelly misgendered!” in the ha’penny place; if you’re like the old sailor in this verse, how is anyone supposed to be able to tell at any one time which of you (multiple) is interacting with them? A co-worker starts off talking with Bob, then Alice, then they answer a phone call and when they come back it’s Darren who is very upset that their colleague didn’t address him by the right name all along!
So, “I couldn’t give a ha’penny jeers for your internet-assembled personalities”?
The Deliberates?
Regarding the drug-legalization experiment, Trump’s comment about Norway being an example of a non-s***hole country, along with the accompanying backlash from the Left, is becoming more and more ironic.
Can I ask for more details on why you would consider this to be a move towards s***hole country status? Is it because they are decriminalizing* but not legalising?
(‘Decriminalisation’ and ‘legalisation’ mean two quite distinct things in drug policy circles: ‘decriminalisation’ usually means that personal possession and use is not a criminal offence, but production and sale still is; ‘legalisation’ usually means that the production and sale is also brought under a legal regulatory framework – it’s the difference between Portugal and Uruguay with regards to cannabis – and decriminalisation is sometimes thought of as a ‘worst of both worlds’ situation, where you still have all of the harms that result from the drugs being produced and sold by the unregulated criminal market, plus the multiplier from there now being more people likely to buy them since you’ve removed the legal disincentive)
*Actually, from reading the article, it’s not clear that they even are fully decriminalising, in the sense of “the government will do to you for owning and using, say, cannabis or MDMA, nothing worse than it would do to you for owning, say, alcohol, tomatoes or plywood” – it’s not very clear but it sounds like they may just be removing criminal penalties as the default first response to someone using a drug, and mandating coercive treatment with the possibility of criminal penalties if you don’t cooperate with the treatment.
I didn’t read the linked article, and my comment was only meant to reflect my simplistic impression that Norway is moving in (what is typically considered to be) a very progressive direction. I was alluding to the fact that Trump’s party and Trump’s cabinet have, AFAICT, generally disavowed progressivism, in particular with regard to drugs, but also with regard to social safety nets and government-run healthcare, which characterize how things work in Norway. Therefore, Trump using Norway to exemplify a “good” country seems ironic, as does his left-wing critics’ assumption that this is about Norwegian people being almost all white, when liberals otherwise seem happy to bring up how nice things are in Norway (with its social and fiscal liberalism) all the time.
Of course, to be fair, this is all confused by the fact that terms like “s***hole” usually refer to a bad/difficult place to live rather than a place containing bad people, yet Trump (perhaps being confused on the distinction himself) would seem to be using it in the latter sense in the context of discussing immigration policies. (And of course his comment was moronic for other reasons as well, but that’s nothing new.)
Norway also has the lucky combination of good institutions and oil.
—
So I think attributing Norway being great to Norway being very progressive is sort of misleading. It’s very easy to succeed if you have this much oil, and enough institutions to prevent looting, regardless of whether progressive or conservative cultures dominate discourse.
—
Trump seems pretty obsessed with oil, actually. I think at one point he was suggesting on his vlog (did you know he had a vlog? It is amazing.) that the US intervene in ?Libya? on “humanitarian grounds,” but then demand oil as payment.
It is not actually that easy to succeed with oil, tho Norway seems to have managed itself very well. See: the resource curse.
The resource curse is… non-rigorous in most of the cases where it’s invoked. The US was blessed with massive oil reserves and didn’t even become a net importer of crude oil until the 1950s. It’s the combination of resource wealth and institutional wealth that matters (with no-true-Scotsman possibilities, since we don’t understand institutional wealth nearly as well).
Venezuela.
Venezuela worked pretty hard to fail.
^ and they are not done yet.
On Justice Force Crater: From 2008 to 2013, the most senior judge in England and Wales was Lord Justice Judge.
I read the lemon mafia thing a decade ago. Maybe the scurvy connection is new.
Does anyone have an ungated copy of the lobbying study? I expect that the Bloomberg write up is probably accurate given that it’s Cowen who wrote it, but I’d still appreciate being able to read their results in more detail.
In the interest of teaching people to fish, rather than handing out fish, please read this comment on reddit.
cheers
“New study finds more evidence that small class size improves test scores.”
My impression is that private schools generally spend their higher funds per student more on smaller class sizes than on, say, higher quality teachers. But I could be wrong. This would be an interesting subject to research.
My son won a scholarship to an extremely good (i.e., extremely well-funded) private high school in part due to them putting a high value on classroom participation because they only had 15 in a class.
In general, I think education research should pay more attention to private schools. Generally speaking, it’s hard to prove statistically that anything works in public schools, but studying private schools might be helpful. My impression is that private schools tend to believe in small class sizes, and the people running the best private schools aren’t stupid.
AFAIK, studies show a small negative effect of small class sizes for public schools, presumably because it means that poorer teachers are hired to teach the extra classes.
More top-tier teachers may be willing to work at private schools than there are teaching spots, in which case private schools won’t have the same issue. But then their experience can’t be replicated by public schools.
I don’t think anyone is suggesting that bad teachers won’t do a good job.
The idea is that as class size increases, the ability bar for “better than bad” rises quickly.
Could just be that parents believe in small class sizes, and they might be stupid.
It’s not so much class sizes as class disruption; if the teacher is spending the majority of the lesson dealing with Tommy who won’t sit down, hasn’t his books, is talking and distracting the other students and so on, not much teaching will get done.
Parents may well think “school with smaller class sizes = school less likely to have a lot of Tommies to deal with = my kid can get taught in class rather than the teacher doing Tommy-wrangling”, either because the school can expel Tommy and the public school has to take him, or that Tommy won’t get in there in the first instance.
Regarding the Hindu temple: according to this Wikipedia list, at 162 acres (about 0.66 km²) this would be the largest functioning Hindu temple, the famous Angkor Wat in Cambodia still being larger. These temples are very different, though: some include commercial areas, markets, ponds, and gardens (which makes them fascinating to visit). Perhaps measuring the area of the buildings would make more sense for this list!
I actually in real life failed a required college class because I forgot to ever attend, AMA
(It was a required PE elective that lasted half a semester. The second half. And there was no system to remind students of these classes other than maybe having mid-terms in other ones. University policy mandated an automatic withdrawal due to non-attendance; I was failed anyway.)
Enjoy your dreams.
What I got from the bonobo-jerk writeup was that bonobos prefer to take food associated with the image of the jerk over food associated with the image of the nice person. I don’t understand why that is interpreted as “bonobo prefers jerk” rather than “bonobo prefers taking things away from jerks”. It seems like it would depend on whether, to a bonobo, taking something from someone is seen as cavorting with them or as punishing them.
I had the same thought while reading it and was surprised the researchers didn’t try to come up with some way of showing which of the two was at work.
ETA: They could have, for example, had the bonobos choose which person to give something to or which person to “punish” with a spray of water or something.
Note that in the real-life implementation of the Trolley Problem, per the actual LA Times article cited, nobody was seriously hurt.
Also, the people who threw the switch at least claim that they didn’t even expect that much.
Since pretty much all ethical systems other than consequentialism recognize a qualitative difference between endangering human life(*) and knowingly causing death, this is a pretty weak example of a real-life Trolley Problem. The Trolley Gods will not be satisfied with such meager sacrifices.
* Yes, even when the danger amounts to p>0.5 of death and/or >1.0 expected fatal casualties
I hadn’t heard about this incident, so I looked up the NTSB report for it.
TLDR: Another instance of, “You can get away with violating safety rules until one day you don’t.”
Crews had been violating rules about setting brakes. When they took cars off one train and put them on another, they were supposed to set handbrakes per operating rules (the handwheels you see on railroad cars; these tighten the brake rigging mechanically and hold the brakes on with a ratchet mechanism, because air will eventually leak out and release brakes if no locomotive is keeping air supplied). However, this takes a long time and is tedious, so crews would just pull the first locomotive away and rely on the airbrakes going into an emergency application, where air pressure keeps the brakes applied. They’d use the air to hold the cars until another locomotive was attached. However, here they started bleeding the air off the brakes and forgot that they had no handbrakes set.
Air leaking releases the brakes? I thought all trains used brakes where pressure was needed to release the brakes, so that once detached they would come to a halt, and I thought that had been totally standard and required for like 70 years…
Pressure from the locomotive is needed to release the brakes, but unlike truck brakes where the emergency brake is applied via a spring (and is weaker than service brakes), train brakes use local compressed air for their emergency braking (which is actually stronger than service brakes). But if you bleed off the local compressed air — which they did intentionally — no brakes for you.
It doesn’t say in the story, but I think that once train brakes have gone into emergency application, they have to be bled manually; simply reconnecting the brakes to the locomotive and re-pressurizing is not sufficient. So what they were expecting to do is to disconnect the locomotive, connect one to the other end, bleed the brakes, and move the cars with the other locomotive. What they were supposed to do is set the handbrake before disconnecting the locomotive. What they actually did is neither connect another locomotive nor set the handbrake before bleeding the brakes.
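Just to make the failure mode explicit, here is a rough logical sketch of the three configurations described above. It is my own simplification of the NTSB summary in this thread, not a model of real brake hardware, and the function and argument names are invented for illustration:

```python
# Rough sketch: cars stay put if at least one braking source is doing the work.
def cars_held(locomotive_attached, handbrakes_set, reservoir_charged):
    emergency_air = reservoir_charged   # emergency application holds via local air
    service_air = locomotive_attached   # a locomotive keeps the system charged
    return handbrakes_set or emergency_air or service_air

# Per the operating rules: set handbrakes before pulling the locomotive.
print(cars_held(locomotive_attached=False, handbrakes_set=True, reservoir_charged=True))    # True
# The shortcut crews had been getting away with: rely on emergency air alone.
print(cars_held(locomotive_attached=False, handbrakes_set=False, reservoir_charged=True))   # True
# That day: air bled off, no handbrakes, no locomotive attached yet.
print(cars_held(locomotive_attached=False, handbrakes_set=False, reservoir_charged=False))  # False
```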
Incidentally, there existed one biologist whom Marx held in even higher esteem than Darwin: an obscure architect and photographer named Pierre Trémaux, whose deservedly forgotten theories of racial variation Marx once described as constituting “a very important advance over Darwin”. Specifically, Trémaux expounded a literal rendition of the magic dirt theory.
From “In the Interests of Civilization”: Marxist Views of Race and Culture in the Nineteenth Century,
From Marx’s letter to Engels,
Engels, to his credit, was far more scientifically grounded and sought to dampen Marx’s fondness for this obvious pseudoscience. From his reply to Marx,
Thanks. Fun stuff.
Much as we joke about Magic Dirt vs. Tragic Dirt, differences in soil quality really do play a role in the world. Tanzania has lousier soil on average than Belgium. Indeed, a lot of the tropical world has fairly poor soil. A combination of geologically old terrain and torrential tropical rains tends to lower the crop-growing ability of soils (except in river deltas).
Very poor soil tends to result in nomadic tribes with animals, rather than farms. Some people have argued that farms were an important step towards greater civilization.
Rep. Steve King (R-Iowa) represents Sioux County, Iowa, which ranks very high on a number of positive metrics, such as best upward mobility for working-class children in Raj Chetty’s study, and also has the best soil in the Midwest, as measured by the highest price paid for farmland (up to $20,000 per acre in 2013).
It’s amazing how large a gap of actual mechanism one can paper over by brazenly employing the words “it follows”.
People have noticed, and they are trying. Maybe someone thinks he could be a good case.
https://www.nytimes.com/2016/06/24/us/politics/supreme-court-affirmative-action-university-of-texas.html
http://www.newsweek.com/affirmative-action-abby-fisher-lawsuits-646010
The commentary about the Ta-Nehisi Coates link seems pretty wrong. Maybe there are tweets somewhere accusing Coates of white supremacy, but they’re not mentioned at the link. The tweets shown there and the original criticism they’re quoting do not accuse Coates of white supremacy. They accuse him of blowing white supremacy out of proportion and taking an overly simplistic view of the world where everything can be reduced to a black/white tribal conflict.
No, this is also incorrect. That’s Richard Spencer’s take, and he is a moron. Cornel West wrote an article in The Guardian which was frankly pretty embarrassingly bad, accusing Coates of, one, being a neoliberal shill, and two, being inadequately invested in the black freedom struggle and not putting enough focus on possible solutions and heroic figures versus just describing the problems and history of white supremacy. This is mostly because Cornel West is an old man with a hyper-fragile ego and has various grudges against Coates. Anyway, because of the fracas this kicked up – and because many people will still side with Cornel West and/or leftier-than-thou takes and/or against people they perceive as Obama fanboys – Coates left Twitter again.
What’s wrong with West’s critique? His point seems to be “White supremacy is Coates’ Great Satan which he blames for everything wrong in the world. In doing so he implicitly absolves all other systems of wrongdoing.” West picks his hobby-horse of capitalism and calls Coates a neoliberal, but plug in whatever system you personally think is the real wrongdoer: isn’t Coates minimizing its evils?
West was extremely sloppy with his specifics, and most of the things he accused Coates of ignoring (drone strikes, for instance, or patriarchy) were in fact issues he’d written and blogged about multiple times. I’m sure it’s always possible under West’s worldview to say one isn’t putting enough emphasis on this or that, but that would make the pointless concern trolling too obvious.
My impression is that grudge holding is sort of West’s thing. Doesn’t really have anything to do with Coates.
I read West’s piece last week. Has anyone written a good rebuttal of it?
This hits a couple of the big points. This Vox article is a pretty good summary.
Well, that was interesting. I can’t really take sides because I disagree with both of their world views. They’re arguing over who’s better at confronting white supremacy when I don’t believe white supremacy exists. White supremacist nations don’t have diversity lottery immigration systems, don’t give non-whites political power, don’t have legally sanctioned programs discriminating against white people and in favor of non-whites for university admissions or jobs, etc. And if you ask actual white supremacists, those are people like Richard Spencer who want a white ethnostate specifically because we don’t have white supremacy and they want it. I’m clearly living in a different world than West and Coates.
But one thing I found very interesting, from The Stranger article and quoted in the Vox article:
When my liberal friends have said Obama was the best president of their lives, I think they’re telling the truth. But Mudede seems to think that when conservatives criticize Obama for setting the Middle East on fire, driving healthcare costs through the roof with Obamacare, Title IX insanity, unconstitutional executive orders, etc etc, they’re lying. In our heart of hearts we love the way Obama governed and are just mad he was black. This is a definite failure in modeling the outgroup.
Modern-day America is not, on paper, a white supremacist state. You don’t have to go all that far in the past, though, to find an America that deserves that label. (The era before the Civil Rights Act is the obvious example; as a less well-known example, consider redlining.) That doesn’t disappear overnight. You may not have white supremacy now, but you are living with its legacy. You could say that America is post-white-supremacist: no longer officially discriminatory, but still working through the aftermath of a time when it was.
(At this point it is traditional to point out that black people have an incentive to overestimate the effect of past injustices on modern problems. This is true, but goes both ways: there’s just as much incentive for white people to underestimate it. It’s a lot easier to point out bias in others than to grapple with your own.)
Note that Mudede’s article doesn’t say anything about “conservatives”. You filled that in yourself. Mudede just says “white supremacists” and “many whites”. Even if you disagree with his assessment of how many white supremacists there are, the underlying argument still holds: people who are invested in white supremacy are going to have a natural bias towards seeing Obama’s actions negatively.
There’s lots of evidence to show that people assess policy along partisan lines. See, for example, how perceptions of the economy flip-flopped from October 2016 to January 2017. Similarly:
– if you are upset about Obama’s handling of Libya and Syria, but not about Bush’s handling of Afghanistan and Iraq, you might be seeing policy through a partisan lens.
– if you are upset about Obama politicizing the DOJ, but not about Bush or Trump doing the same thing, you might be seeing policy through a partisan lens.
Obviously, “partisan” doesn’t equal “racist”. But it also doesn’t disprove it. The real answer is somewhere in the middle: some conservatives disliked Obama on pure policy grounds, while others (say, this guy) were also influenced by race. Your disagreement here is a quantitative one, not a qualitative one.
Mudede made two statements about who was upset about “Good Negro Government.” One was “white supremacists” and the other was “many whites.” Are these the same people, or not?
And yes, I’m very unhappy with the Bush administration. I’m a conservative or paleoconservative, not a neoconservative. Bush/Cheney made me basically stop identifying with the Republican party. What I like is small, decentralized government, and Cheney said “the era of small government is over.” What I wanted was the elimination of the department of education, and instead got No Child Left Behind. What I wanted was government out of healthcare and instead got Medicare Part D. What I wanted was the end of the welfare state and instead I got “faith-based initiatives.” What I wanted was a republic, not an empire, and instead I got wars of choice started under false pretenses. I was not at all happy with the presidency of George W. Bush and it had nothing to do with his race. Similarly, I was not at all happy with the Obama presidency, and that also had nothing to do with his race.
Still, I think it’s interesting that Mudede thinks that many whites (or perhaps only white supremacists?) secretly liked Obama’s administration, and are lying about it to cover up the fact that all the good stuff they secretly like was done by a black man. No, I think people who don’t like the Obama administration’s governance genuinely don’t like his policies and actions irrespective of his race, and the people who don’t like Obama because of his race don’t secretly think he did a great job and are lying about it because it was done by a negro.
If you and I and Mudede were all asked to predict what percentage of white people were white supremacists, I’m sure we’d all have different numbers. My point is simply that, once you accept that some white supremacists exist, Mudede’s argument stands: those people are going to be very motivated (consciously or not) to see Obama’s presidency through a negative lens.
I take you at your word that your opposition to Obama had no racial component. But you are just one person, and it is not safe to assume that everybody else opposed Obama for the same reasons as you. It is, of course, equally wrong to assume that everybody else opposed Obama for racist reasons, and I hope it is clear that I am not arguing for that position. My goal here is merely to convince you that Mudede’s position is not necessarily incompatible with your experience. It does not explain everything, but it is a piece of the puzzle.
Edit to add: I think it is wrong to interpret Mudede as accusing people of lying about liking Obama’s policies. What he’s saying is this: if you are a white supremacist, you believe that black people should not be in charge. If a black person in charge did a good job, that would be evidence against your belief. People don’t like it when their beliefs are challenged, and they are motivated to explain away such challenges. Therefore, white supremacists will look for reasons to oppose Obama’s policies, even if they would have let them slide under a white president.
The mechanism shouldn’t be controversial. It is not uncommon to see people here argue that [the left / Democrats / the mainstream media] are opposing Trump for actions that would have been acceptable from members of the in-group. As I mentioned in my previous post, we have plenty of evidence of this playing out in (for example) economic polling. Really, the only question is: of the people with kneejerk reactions to Obama, how many were caused by him being a Democrat, and how many were caused by him being black?
We can quibble about the proportions, but once you accept that framing, you’re like 80% of the way to agreeing with Mudede.
It is not fair to extrapolate from “some white racists felt a certain way” to using that to defend the broader position. The quote has the word “many” in it, and the overall tenor is about America as a whole, not about a specific subpopulation of the US with its relative size and importance given. I have never identified as a conservative, but my reading of these pieces has been that they are in general painting a large portion of the population in a negative light without explicitly doing so.
All right then. I’m curious as to who, specifically, Mudede was talking about when he said “Nevertheless, many whites saw his presidency as an abomination: a black man governing the US well by conventional standards. *snip* It’s not so much about a fear of a black planet, but the fear of Good Negro Government.”
I would think these whites would have to be some large, representative group to have any impact at all. It would be like me saying “Whenever capitalists show they can improve the standard of living for workers, communists freak out and do everything they can to bury the evidence of ‘Good Capitalist Government.’ Nevertheless, many people saw the Trump presidency as an abomination: a capitalist improving the standard of living for workers. It’s not so much about a fear of a Trump presidency, but the fear of Good Capitalist Government.”
Yes, there are a handful of literal communists in the US (probably more than there are literal white supremacists) who would rather see the economy tank so badly that it would increase class consciousness to the point of sparking Glorious Revolution. But saying that strongly implies I think the real reason Democrats (the people who don’t like Trump) say they don’t like Trump is that they’re filthy commies jealous of his success running the government and stimulating the economy, and not their clearly articulated position that they hate Trump because they think he’s a racist, sexist buffoon/fascist.
So if I could ask Mudede a question it would be: “who specifically were you thinking of that has a fear of Good Negro Government?” If it’s just people like Richard Spencer, then why bother mentioning it, because who cares what Richard Spencer thinks? He has no power or influence and his kind is rarer in America than literal communists. If it’s people beyond that, and you’re talking about Sean Hannity or National Review or Ann Coulter, then you’re deranged because all of these people have clearly articulated ideological differences with Obama, and the idea they think he actually did a great job and they’re all just lying about it because they’re scared of Good Negro Government is ludicrous.
It all starts sounding like Arguments From My Opponent Believes Something.
Edit: I think I meant to link the Fetal Attraction essay about how people will frame bogus arguments about what their opponents really believe. “Many whites” don’t really dislike Obama’s policies, they’re just scared of a black person demonstrating he can govern well.
Edit to respond to your edit:
I don’t think that’s the right model of modern white supremacists or nationalists. I don’t think Richard Spencer believes that no black person could manage the government, or be smart or accomplished in something. I’m pretty sure he knows Ben Carson and Thomas Sowell exist. They think they have different interests that are incompatible and should live separately and be self-governed separately.
I suppose that might just be white nationalists and not white supremacists, but I have no idea who these people are. That someone thinks blacks are so inferior that no black person would be able to do a good job as president is extreme beyond even Richard Spencer and David Duke. Perhaps Anglin, the Daily Stormer guy, thinks that? But why Mudede or Coates would bother to engage with that extremely tiny fringe minority when discussing race in America, I don’t know, unless their perceptions are very warped. But then again, I do think their perceptions are very warped. As I’ve said, they see a white supremacist (not post-white-supremacist) nation that I don’t see.
If the group is small enough then, regardless of its narrow correctness, the argument is a non-sequitur as far as policy or the fuzzier “national conversation relevance” criteria. See also: radical Muslims
I’m not saying this rephrasing removes all the issues people have with the original claim, but it certainly removes a lot of them; it can’t be used to bash basically everyone alive today, and it focuses on real and tangible things over feelings and beliefs about the beliefs of others. It’s still vulnerable to exaggeration of the current harm caused by the past harms, which you’ve already talked about, but it’s a huge improvement over a lot of the rhetoric of Coates and his ideological brethren.
Yes, the rhetoric I hear is “confronting white supremacy” or “smashing white supremacy” or “ending white supremacy.” They’re talking about current and future action. It strongly suggests they believe white supremacy is an ongoing thing and I don’t see that at all.
There are different definitions of white supremacy. It’s one of those terms with a popular meaning – a “white supremacist” is someone who wants to keep anyone not white (with what counts as “white” varying) down through laws or extra-legal violence or some combination – and also a meaning assigned to it, usually by someone with a certain sort of social sciences background, which is far looser. It’s under the latter meaning that someone can call Justin Trudeau a “white supremacist” and mean it. It’s a somewhat more obscure version of how “racist” has several different meanings.
Personally, I don’t like this habit that has emerged in the social sciences – it makes it difficult and confusing to discuss these things, because having a term that simultaneously has a prescriptive technical meaning and an emotionally charged descriptive meaning is not conducive to reasoned discussion. (In my less charitable moments, I suspect that’s the point – that it’s motte and bailey: describe someone using a term with a charged colloquial meaning, and when they object, rebut that you’re using the technical meaning.)
It’s similar to what Scott called the worst argument, isn’t it? Using a label to bring in connotations that aren’t merited by the example on its own terms – one shouldn’t react to whatever it is we’re calling white supremacy (Ta-Nehisi Coates, wasn’t it? Or disliking Obama) the same way one would to organized lynching – simply because the example falls in that category through some technicality or fiat.
@dndnrsn
I have to say, if you can come up with a charitable explanation for the behavior that doesn’t seem laughably bad to me I’d be impressed. Most likely explanation, it’s the rough equivalent of calling a democrat a socialist and then talking about how communists killed millions of people in order to try to make democrats seem evil. It’s bad behavior but also pretty typical. Most charitably, it’s a way to score academic humanities points for bombast and style; all the other effects are just unintended consequences. Worst case, it’s some combination of what you say above with what I think is the most likely explanation.
By way of indirect response: let’s tell a story about Trump.
For generations, America has been governed by a certain kind of elite: cosmopolitan, intellectual, politically correct, you know the drill. Many Americans have grown accustomed to the idea that this is what American government looks like. (They fervently swear, of course, that they hold no bias against non-elite Americans, although you might have a hard time convincing non-elites of that. Still, I’m sure they’re very polite when they meet Arkansans face-to-face.)
Suddenly, along comes Trump, flipping the bird to all those elites and their expectations. He’s not an intellectual. He’s not cosmopolitan. He’s certainly not politically correct. Worst of all: he wins.
Look at the elites in a tizzy. Things are not going as planned. Something, they feel certain, must be wrong. Look at him, sitting there in the White House, as if he belonged. Can’t people see that there’s something wrong? That his presidency is illegitimate?
He starts making policy. Obviously, those policies are bad — how could they be otherwise? Never mind that much of what he does is standard operating procedure, and they’d never have noticed if, say, Jeb Bush had done the same thing. (They certainly wouldn’t have objected to Obama.) Presidents don’t look like Trump! Presidents are polite to foreign leaders, and apologize when accused of sexism, and are careful to only say politically correct things about immigrants. What is the world coming to, if a person like this could get elected?
“Tut tut,” say the elite supremacists.
______________________________
I hope I have not failed the Intellectual Turing Test too badly. If you do not recognize the story, then the rest of this post will probably not work.
Assuming that it works at least a little: what’s my point? A few observations:
1. You might ask: who exactly are the elite supremacists? Where do you draw the line? It’s a complicated question. Relatively few elites will come out and proudly proclaim their hatred of the non-elite; many more will deny it — and even believe their own denial — while acting in various ways that undermine that claim. It’s a spectrum, not a binary.
2. The effect is not necessarily conscious, or even malevolent. Part of it is just a response to the violation of unstated expectations: “This is how things always worked, but now it’s different, and change feels bad. Explanations for why this changed thing really is bad are going to feel extra-compelling.”
3. This is, at best, a partial explanation for opposition to Trump. Many people would oppose every last thing Jeb Bush or John Kasich did with just as much fervor. Other people have legitimate disagreements with his policies. It’s hard to discern the true motivation for any particular opponent, and realistically it’s going to be a combination of all three.
4. Elites and non-elites are going to disagree massively on the size of the effect, and nobody is really in a position to evaluate it objectively.
5. This story obviously won’t convince elites to support Trump. Realistically, it shouldn’t — even if you eliminate any bias, they still disagree vehemently on policy. Nevertheless, it would be good for elites to take this story seriously: and, in particular, to consider that their viewpoint is not objective.
PS: Please don’t spend too much time pointing out the flaws in the analogy. I am aware; I could name quite a few myself. It’s an intuition pump, not a rigorous proof.
@quanta413
There’re many better explanations than “this is a deliberate scheme.” The explanation of “whoa, academia” is probably the best one. The tightening relationship between left-wing politics and the academy has been poisonous for both. In this case, it’s led to the volcanic-level hot take of “not being fully on board with The Program is on the same spectrum as being Richard Spencer; it’s just a lesser form”.
@Iain
Good post. I mean, my statements above aside, things do happen on a spectrum. However, spectrums end somewhere – definitions of white supremacy that have, at the lowest levels, things that are just indicators of being insufficiently with the program, are like definitions of the autism spectrum that include “occasionally being nervous in social situations” as being on the autism spectrum.
Not sure how this differs from my “most likely explanation”. My “most likely explanation” wasn’t that it’s a deliberate scheme. It’s just that if someone hates the outgroup and then they see a method of tarring the outgroup they’ll copy it. No cabal schemes to compare democrats to communists either; it’s just hating the outgroup. It’s intentional, and it’s poor behavior. But it’s also pretty common, and it’s not a scheme.
I really don’t see how they can be so blind to this. I admired the intent and execution of academia for decades. Now I see an increasingly corrupt influence of politics. Academia is being voluntarily or involuntarily co-opted for political purposes. There is a left-wing politics / academia / journalism nexus that is self-reinforcing and has damaged these institutions in the minds of the public. Opinion polls bear this out. This nexus believes it is “leading”, but it seems to be dropping followers like flies. It’s gotten to the point that if you want to know what a professor from Columbia thinks about a culture war issue you might as well check the DNC website.
I have a feeling a voter referendum on funding the social sciences wouldn’t end well for the academy today. Public funding shouldn’t be a popularity contest, but the condescension toward the public at large from this group is palpable.
It’s mostly the public face of academia that needs fixing; my daughters are going through school now and they do not report any indoctrination, although they both voted Clinton, ha ha. I can easily tolerate this. There are plenty of people who care about academia being respected; they need to speak up.
@quanta413
But consider: Academia is kind of closed and insular. What would actually work to harm the outgroup, to advance one’s ingroup’s interests out in the world in general, is not necessarily what will advance an individual’s interests within academia (be that in the faculty lounge, or in the fighting for advancement among PhD students, or in the sphere of left-wing campus activism, or whatever). Compare it to a political party: the party as a whole needs to advance its interests, overcome its political opponents, and so on, but within the party, if you want to be the candidate or the chief of the party committee or whatever, you’re competing against other people in the party. To some extent, whatever gets said about an outgroup that isn’t present on campus or in campus politics is really just a byproduct. Or, for another analogy, consider infighting between branches of the military, within branches of the military, etc. – this continues to happen even during major wars.
@tscharf
I went to school somewhere there was a fair bit of culture warring, but it was easy to stay out of it, by and large. It seems that in the years since I graduated, the culture warring has increased, and has become harder to avoid, but it’s still probably pretty easy to avoid if you just stay out of, say, student politics, etc. People here overestimate the degree to which it’s present on campuses.
My daughter’s experience at Oberlin was that the intolerant monoculture was much more the students than the professors. The professors were generally left as well but recognized an obligation not to be biased in their teaching. When one of them said something in class that implied that of course everyone there agreed with his political views, my daughter pointed it out to him afterwards; he apologized, and (I think) later apologized in class.
The attitude of the student culture, on the other hand, was that if you did not agree with the left you were either stupid or wicked.
(My daughter can correct this here if she wishes–I’m working from memory of what she said.)
I’m tempted to link West’s piece with the suggestion to read it as if it was written ironically.
I’ve had the dream at different times for nearly every stage of education, including a program I transferred out of. I’m always my current age and stage in life, though, but for some reason back at school.
Um, I hit the ‘report’ button by accident.
Sorry.
Why is no one asking the important question: does political lobbying improve health outcomes?
Well if lobbying is negatively linked to firm performance, and on average worse firm performance leads to a weaker economy, and a weaker economy leads to lower tax revenues, and lower tax revenues lead to less government spending, and less government spending means fewer resources for medical care, and fewer resources mean people receive less medical care overall, and less medical care leads to better health outcomes, then we can say that yes, political lobbying does improve health outcomes!
(cue Libertarians pointing out that governments never reduce spending when tax revenues decrease or something, and the whole chain of reasoning falls apart)
The story on bias has apparently got the historical origin of the term wrong, at least judging by a quick google:
My version is the anxiety dream where I’m supposed to be teaching a class and realize I haven’t been showing up to do so.
Lawn bowling balls are still very much biased, in this sense. Whereas a tilted lawn wouldn’t be much help to anyone, I can’t think.
The sport has enjoyed a semi-ironic resurgence in Australia, frequently as a team building activity/excuse to drink in the sun.
It’s weirdly awesome to find out that professors have this from the other side.
Wait so nobody shows up for dream class!? That’s a huge weight off my dream shoulders.
Short film idea: neither the students nor the professor is notified about a small class until pre-finals week. The professor is trying to figure out what the (nonexistent) guest lecturer covered so he can write a final, while the students bluff their attendance and individually try to steer the final’s material towards their personal comfort areas.
Although the bias in bowls was originally made by a weight in the ball, in modern times it is made by the *shape* of the ball – it is turned to a (very slightly) ovoid shape giving a predictable tendency to curve to one side or the other depending which way you roll it. You can buy bowls with different degrees of bias – I have a set of my grandfather’s bowls turned from lignum vitae with medium bias.
Slanted – well, domed – bowling greens do exist in the game of “crown green” bowls played in the north of England, but although the slope of the green makes for interesting play (for example, shots with an S-shaped profile as the slope of the green and the bias of the bowl work against each other) it does not favour either player, other than the more skilled one.
Kipling has a story somewhere about a badly warped pool or billiards table somewhere in British India. People who have become skilled at playing on it regard playing the game on a flat table as much too easy.
Also had that one, though I continued to have the studenty dreams well after I stopped being a student (I don’t think I’ve had the student one for a while, but like most people I don’t remember dreams terribly well so I can’t say for sure how long it has been).
Professor’s version of the anxiety dream: This one seems to be pretty common among a certain population as well.
When I was in grad school, my advisor out of the blue asked me if I’d started having the dream yet where I’d forgotten about a class that I was supposed to be teaching, not taking. I bolted upright and said “Yes! Just a few weeks ago!” I was much struck at the time by it, as it was the opposite of the (for me common) dream about forgetting to attend a class I’d signed up for. The “teaching variant” wound up replacing the other one, and recurred too often for comfort.
In my case, contra ManyCookies’s suggestion, I would dream that the students had been attending the class, and had been manfully trying to carry on without me, and boy were they disappointed with me when, shamefaced, I showed up on the last day to apologize to them — their expressions were always most reproachful. Rather worse were the administrators, and the dream would usually pass away with a sound like thunder as some dean or department chair asked me to come into their office …
I wasn’t sure what to think of the Damore spectacle, but between him letting his lawyers do the talking and the revelation that Mencius Moldbug eating lunch is considered a security breach by Google, I have hopes of seeing Google suffer.
Him letting lawyers do the talking sounds like a good idea, but the brief seemed to me to be really horribly written, in a way that suggested Damore writing it and using it to pontificate. Sentences like “Google employees and managers strongly preferred to hear the same orthodox opinions regurgitated repeatedly, producing an ideological echo chamber, a protected, distorted bubble of groupthink” (and there are lots of these) don’t strike me as normal legalese. I’m actually pretty confused by this given his lawyer’s apparently good reputation.
Agreed, and the foolishness of representing oneself in court had me going “meh” about the case. This is curiously confusing.
I’ve heard that briefs can be written for public consumption more than for the court, and courts tend to just ignore the florid rhetoric.
Patent litigator here – the filing isn’t a brief; it’s a complaint, which is the starting document for a lawsuit. In most lawsuits, a complaint is a simple statement of the facts necessary to make a legal claim for the plaintiff (Federal Rule of Civil Procedure 8 technically requires that the complaint be “a short and plain statement of the claim showing that the pleader is entitled to relief”). In high-publicity cases, however, it’s common for lawyers to prepare complaints with a lot more invective and unnecessary detail. This even happens in patent litigation – I once had a client who wanted a complaint filled with tons of unnecessary detail on the thieving Taiwanese manufacturer who was supplying all the infringing defendants.
Now, if you see that kind of unnecessary invective in a brief, someone’s probably doing something wrong. But this is the kind of complaint I would expect to see in a high-publicity case like Damore’s brought as much to embarrass the defendant as to win damages.
Commercial litigator here. +1 to this comment. I generally believe that lengthy narrative complaints like this one can be a) effective, in that everyone knows about this complaint and Google is thus very likely to take it very seriously and b) risky, in that Damore risks credibility shots against him during discovery if there are any factual or context mistakes in any of what he has written.
I suspect that most judges and clerks will read this complaint with some interest. Bbeck310 is exactly right, though, that judges almost universally dislike invective in briefs and arguments. Parties do it all the time, but it’s distracting to the merits and often creates invective feedback loops that almost always end up being a giant waste of time and money.
Right. The goal here might not even be to win the case on the legal merits.
It might be to dig up so much embarrassing dirt in discovery that even when Google “wins” the case in court they lose so much more in the court of public opinion.
Which in turn might force Google to settle the case, thus actually providing Damore a legal win of sorts.
At least that’s what I read somewhere. I’m not smart enough to know if that’s a reasonable prediction.
An alternative objective, if the motive for the suit is ideological rather than financial, is to cause Google to substantially reduce the behaviors that Damore is complaining about. That doesn’t require either a win or an out of court settlement, although one or the other would help.
A lot of the logic of the situation from that standpoint depends on what one believes about the motivations of the people running Google. At one extreme we might suppose that the decision makers don’t themselves care about m/f ratio, ideology, or any of that stuff but think their employees do, and are creating an environment friendly to one ideology and hostile to another because they believe that attracts more people than it repels and can be done with less effort than the alternative, since lots of employees will voluntarily help with doing it.
On this interpretation, if the case creates significant costs for Google in bad publicity, or if Damore wins and that verdict increases the future costs of continuing the policies he complained of, one would expect Google to shift to less ideologically biased behavior.
Assume, at the other extreme, that current policy reflects the strongly held ideological views of the people running Google. Google is rich enough so that it can afford to lose quite a lot of money through bad publicity, out of court settlements current and future, and the like. If the people in control are committed social justice warrior types they are likely to accept those costs in order not to be seen, by themselves and their side, as giving in to pressure from the enemy.
That’s likely to be very effective. As a programmer, I’ve lost much of my esteem for Google and any desire to work there due to the complaint alone – the political atmosphere sounds oppressive, and I can’t trust any employer that allows people in supervisory positions to openly express animus towards any race or gender, as Google appears to have done here. I’ve heard similar sentiments from other people in my field, as well.
And part of my reaction was to wonder if I should sell my Google stock–not because I expect the case to lower its value, although that’s possible, but because I would prefer not to be a partial owner of a firm that acts in ways I disapprove of.
What percentage of programmers do you think might feel this way?
And how would you guess that compares to the percentage of programmers whose esteem for Google might increase, because they think that Nazis like Damore deserve to be oppressed, and they actively desire to work for companies who are principled enough to stand up against such evildoers.
Am at Google. Internal opinion (among people who cared to voice one) was split about equally between A: “He might be right”, B: “Anyone in group A is a bigot”, C: “I didn’t think Google was oppressive but group B makes me think it is an issue”, and D: “We shouldn’t be talking about this / I am tired of it”.
I would guess that externally about 25% of programmers had a worse opinion (of some degree) and 25% have a better opinion to some degree.
So, if your estimates are accurate, this is a net wash for Google. Any negative PR is counterbalanced by positive PR.
I think too many rationalists are predicting bad things for Google here by assuming that most people think discriminating against people with non-leftist opinions is wrong. I am here to say that no, not only does about 1/3 of the country think discriminating against Damore is acceptable, they think it’s a very good thing and the only problem with society is that more people aren’t doing it.
@Matt M: so let Google keep doing it, and let’s everyone who’s not a Leftist boycott Google.
Addendum: My source was a Megan McArdle column: https://www.bloomberg.com/view/articles/2018-01-12/silicon-valley-will-pay-the-price-for-its-lefty-leanings
Also, my guess for why Google does this is not so much that the management has a strong personal devotion to intersectional feminism or whatever, but that of all the forces they’re squeezed between here, they’re most afraid of some kind of lawsuit/legislation over gender discrimination/imbalance.
One thing is certain, when you have as much money as Google, everyone wants some of it!
From what I’ve heard, Google CEO Sundar Pichai played Pontius Pilate on Damore. He really wouldn’t have wanted to fire Damore, but YouTube boss Susan Wojcicki, who was Sergey and Larry’s landlady back in the 20th century and then was Sergey’s sister-in-law for a while, had her feelings hurt when one of her five children asked her about Damore’s memo, so Damore had to go. (Susan’s sister Anne, who used to be married to Sergey before taking up with slugger Alex Rodriguez, is the CEO of 23andMe, which is pretty funny when you think about it. I’m not sure, however, that Susan ever got the joke.)
And, yeah, reports of this kind of high-estrogen soap opera lowered my estimate of the chance that Google could successfully build self-driving cars.
Do you think this sort of nonsense isn’t happening at every other company working on self-driving cars?
I am definitely going to start describing things as “high-testosterone nonsense”.
Stuff — including dominance games among managers — gets described that way all the time; it’s not taboo.
As for Susan Wojcicki’s little story about her daughter, if Steve Sailer believes that one is literally true, he’s the only one.
BTW, the self-driving cars are over in Waymo (which is not part of Google), so separate from this bit of drama.
Sure beats the phrase “toxic masculinity” anyway.
@Matt M: The ‘political atmosphere’ I was referring to wasn’t the specific beliefs being debated, but the web of organizational blacklists, loud public arguments, purity tests, grudgeholders archiving statements and passing them to their lawyers, physical threats, and other general nastiness revealed by the complaint.
Even if you think Damore deserved to be fired, no one wants to work in an office where supervisors openly brag of discriminating against people, being friends with someone on a secret organizational shitlist could have serious impacts on your career, and any one of your co-workers could be combing through your social media and message history and archiving anything that could be characterized as hostile. Look at the people whose communications were named in the complaint – they’re certainly sweating it now (and I noticed that several no longer list their employment at Google on social media).
It’s a freaking job, not an election. I just want to get paid and make cool stuff!
Susan Wojcicki …had her feelings hurt when one of her five children asked her about Damore’s memo
I’m agreeing with The Nybbler here – whenever some opinionator or other person making a high-profile mountain out of a molehill drags out the anecdotes about their cute wobbling-lipped little tyke (or older child) asking “Mommy (or Daddy), why did the nasty man say that nasty thing?” and then Mommy (or Daddy) writes an entire screed about how this ripped out their heart and they had to curl up in a foetal ball on the kitchen floor – because how can they explain to Cute Blond(e) Tyke about the cruelty of the world, how can they look into the eyes of innocence and destroy a child’s trust in the goodness of human nature – I’m calling bullshit. It’s just another way of using “Won’t anybody think of the children?”: I don’t care, or it doesn’t affect me, and that’s not why I’m putting out a hit on this guy, but my sweet little baby – I have to protect them!
Remember all the stories about weeping toddlers convinced Trump was going to come kill their families in the night in the aftermath of the election? Or in the run-up to it? Like the tweets in this article? Yeah, I don’t think so – if any kids were saying stuff about Trump it was because their parents were having hysterics about the election in front of them and scaring their kids themselves:
This, specifically, is what I disagree with.
I think you are assuming your own feelings are universal. They are not. The reactions to the Damore memo indicate that there are, in fact, a whole lot of people who DO want to work in an environment where people are openly discriminated against, so long as those people are white males with conservative opinions.
It’s not even that they “don’t care” that Damore is discriminated against, they care about a lot… in a positive sense! They want him to be discriminated against. They threatened to resign from Google if Damore wasn’t fired.
Some people do want their workplace to be more political, so long as it’s the “right kind” of politics. They love that they’re allowed to spend work time and work resources discussing their experience as a giant yellow dragonkin – if that was taken away, they’d quit and then sue for discrimination against dragonkind.
And I’d suspect that while these sorts of people might be minorities in the workforce as a whole, they probably aren’t minorities among the demographic Google cares about – tech people who live in the bay area.
The values of fairness and non-discrimination and “leaving politics at home” are not universal. Not even close.
Yeah, just look at veeloxtrox’s comment if you want to see how certain Googlers think. I guess technically group B might want to keep their politics out of the workplace, but that type doesn’t tend to – and judging from some details out of the Damore lawsuit, that type didn’t in many cases.
@Matt M: Again, I’m not referring to politics in the left-right sense but the Louis-XIV’s-royal-court sense, where you have to spend more time worrying about whether you can be seen with the person you’re talking to or whether you agreed with the boss loudly enough to be noticed. No one likes this, and it’s the death of productivity.
@AnonYEmous: “I don’t like this kind of political atmosphere” encompasses groups A, C, and D, for different reasons. Even Bs get tired of a nonstop drumbeat of politics, as many of my friends on Twitter have.
edit: On an unrelated note, Steve Yegge just quit Google citing (among other things) the company being ‘mired in politics.’
Okay… but I don’t see how this is relevant to Damore. The only dimension on which anyone cares about him at all is left-right. The point here is that you don’t have to worry about the royal court… unless you’re a Nazi, in which case, thank God we have those royals to give you the punishment you so justly deserve!
All the gay-trans-latinx-otherkin people aren’t looking over their shoulders at all. To the extent that they play politics, it’s because they want to. They have no fear of any of this stuff. Nor should they.
Matt:
I am extremely skeptical that you have an accurate picture of the views of the average Google employee. Is there data anywhere on this? Because what you’re providing seems like a caricature.
Yeah, if you believe that, I have a bridge to sell you. If your children can’t handle the cruelty of modern politics, it’s because you’ve been scaring them about the cruelty of modern politics. There is no one else to blame — as long as there aren’t tanks in the streets or secret police hauling off relatives, your kids don’t have the context to scare themselves. I remember being that age. I had zero independent political opinions.
Nornagest:
My kids (16, 12, 8) ask questions about politics and express opinions all the time. They don’t necessarily understand everything that’s going on, but they do sometimes get upset or worried about stuff, want to go to protests, etc. My eight year old daughter was very upset by the election of Trump, and while we discuss politics at home, my sense is that she picked this up from seeing or hearing stuff outside the home.
One sideline: my 16 year old had the AP US Government class last year. I’ve been really impressed by how much that has changed the quality of questions he asks and observations he makes–the class seemed to really do a nice job of giving him a basic understanding of how various levels of the US government work and are supposed to work, and giving him at least a little context for understanding current events.
Sixteen is a little older than I was going for, since I can’t see anyone considering it a horrifying revelation if their sixteen-year-old daughter asked them some awkward political questions. I’ll concede that younger children might end up getting scared outside the home if the other adult influences in their life are pointing that way, but I doubt that’s what happened in the Wojcicki/Damore case — an elementary school teacher would talk about Trump’s election, but probably not about hiring practices at Google.
I recall my own AP Government class being about when I started developing an independent understanding of politics, too, so I guess they must be doing something right.
@Matt M: I guess your experience is different, then. I mostly work with adults.
According to the recent Wired article, they do have fear of this stuff being used against them. They didn’t think they had to look over their shoulders, but now they’re realizing they do, because the same forms of retaliation (leaks, HR complaints) are being directed at their faction too.
But also, few if any of them were seeking out this environment when they joined Google. They may participate because it’s preferable to quitting, or because from up close, it feels more like righteous self-defense than a stressful, soul-crushing time sink. But had they known in advance what it’d be like, they might have avoided the situation.
Fong-Jones is a major culture warrior and should not be considered a reliable source. That said, it’s clear that leaks are now considered fair play by both sides; the leaking of the Damore memo seems to have broken the taboo. As for HR complaints, the Wired article claims that one employee got a verbal warning for posting “fire all the bigoted white men” (I believe this is the meme at page 121 of the lawsuit, labeled “47” at the top). Another person, on the other hand, was publicly rebuked by top management (page 104, marked “30”) for posting a question to the “Diversity TGIF” dory asking if diversity efforts would address the underrepresentation of whites. This isn’t in the lawsuit, but if I recall correctly it was a response to another question asked by an Asian Googler who wanted to know if the diversity efforts would result in discrimination against Asians, who were overrepresented; that Googler was also reprimanded.
So indeed, it is possible for SJWs to do things which get them a response from HR and management. But they’ve got a LOT more leeway.
Yup. Exactly right. I was a commercial litigator for about 15 years, and “know who your audience is” is a basic tenet when drafting anything.
For most everything else, the audience is the court, so as Bbeck says, drafting motion papers with this kind of kitchen-sink approach and emotional appeal instead of statements of facts and law is generally a bad idea. But the complaint? Embarrassing the opposing party with a salacious description of their misdeeds can help pressure them to settle, or to change their ways, or just infuriate them.
The media would not be publishing articles on this complaint if it read like your typical cookie-cutter personal injury complaint. Add in enough of the lunacy going on at Google, though, and it’s a different story.
I think I share the same wish as you, but I doubt any legal team on the planet can out-maneuver Google in court (and sustain it), if for no other reason than Google’s budget >>> everyone else’s. When it comes to high-profile, potentially Orwellian-nightmare-exposing lawsuits, would they not rather pay people to shut up forever and go away?
Pay people to shut up? Google couldn’t pay Moldbug enough to shut up. Dude basically thinks he’s Zoroaster as far as wanting to destroy those who serve the Lie regardless of personal or social consequences.
Crushing conservatives and sundry witches in court because of their budget is a different matter.
I sincerely wish you weren’t joking.
I’ve never studied the issue formally, so I’ll just speculate that the amount of money a company pours into litigation is only weakly correlated with success. Put a little differently, Google putting $6 million into defending this case won’t have much more impact, if any, than Google putting in $5 million. There likely wouldn’t be any way for Google to put $100 million into it under any circumstance, so the idea that massive legal expense necessarily leads to victory really isn’t right.
I’m not a class-action expert, but I know at least a little about it. The rules don’t permit a payoff to the named plaintiff in order to get rid of the entire class. The real key to these cases is class certification, which is the procedure in which a court determines if the set of plaintiffs described in the complaint is a proper class. The question there is whether individual issues predominate over common issues (which would result in no certification) or whether common issues predominate over individual ones (which would result in a certified class). There are other requirements, but that is the usual battleground.
Here, there’s also the underlying question about whether or not there is liability even if you take all of Damore’s fact claims as being true. There are different procedures to tackle those issues — typically either motions to dismiss or motions for summary judgment.
I take your point and thank you for stating it.
When I picture Google vs. Some Guy et al., I’m reminded of the cases I came across when I worked in forensic engineering. When Joe Schmoe’s Landrover did something unexpected on the highway and killed his family, Landrover would respond to his lawyer’s request for information by sending them a truckload of documents that would take 1000 lawyer-hours to sift through. I know there are some procedural rules around it, but that was the gist of it.
Similarly, Google can probably afford to make Damore’s (and anyone else’s) life as miserable as possible in the process.
Almost all discovery is now done electronically and is machine-searchable, and there is commercial software available that is quite strong at sifting relevant documents from a mountain of trash. Your Landrover anecdote sounds entirely correct historically, if somewhat less relevant now. I also suspect without knowing that Damore’s case is sufficiently public and political in nature that he may be able to raise capital to fund expenses to deal with burden issues like this.
I absolutely agree that Google is likely to find every single thing that Damore has ever said or done online, and he’ll have to answer for anything that undercuts his credibility. And if he has any serious skeletons, he will be miserable indeed.
Right, but the joys of discovery mean that Damore’s lawyers are likely to find every vaguely relevant thing that Google’s suits have ever said or done in the quasi-privacy of their corporate email accounts. If the worst Damore has to answer for is the manifesto and the FIDE fib, he may consider that a fair trade.
Any third party paying for Damore’s lawyers, will likely consider that a very good trade.
For those unaware, he’s got a fundraising campaign for his lawsuit:
https://www.fundedjustice.com/d1JmT3?ref=ab_7FxTKvj6ICY7FxTKvj6ICY
only $25,000 as of writing this, but that’s $25,000 more than I’d be able to raise if my boss fired me so *shrug*
True, but “every time” would be once, in 1998. The 1976 act (effective 1978) doesn’t really count in my book, because copyrighted works were expiring regularly up until then.
Popular “wisdom” is often wrong; the political and business worlds adhere to many easily-disprovable folkways.
Longtime commenters here are probably tired of seeing my arguments that the power of money is gigantically overrated in politics. I see no reason for an epistemological crisis.
A corporate lobbying shop, surely, is constantly at work inventing reasons why it’s absolutely vital to the parent organization. The execs back in Dallas or Denver or Seattle, who rely on those same people for “inside” knowledge of D.C. politics, are unlikely to challenge those contentions.
I don’t know if it’s a case of lobbying just “cancelling other lobbying out”, or lobbying just being nearly useless. It’s like the debate over campaign tactics and spending in political science – the evidence seems to suggest that most of it is useless or limited in effect compared to the “fundamentals” of an election, but is that because they’re simply useless, or because they become nearly useless when everyone is using them?
Well, the interpretations are wildly different, though.
If lobbying efforts just cancel each other out, it’s dangerous to cut your lobbying budget, because then maybe your opponents’ lobbying won’t be cancelled out and it will work against you.
I don’t doubt there are instances where lobbying is useless, maybe even lots of them, but I also find it weird to be discussing this on the same page where we are discussing whether IP owners will lobby for another copyright extension before 2023. And they also lobbied for the DMCA, net neutrality, etc., etc.
I don’t know, I’m fairly skeptical of the scale of the benefit of lengthy copyrights to copyright holders (or of piracy suppression). I recall studies showing that most profitable creative works make almost all of their profit in the first year after publication. Extracting profit from old works may be less worthwhile than investing in new works, though of course the lawyers will try to convince their employers that it’s worth doing. They might also make more money producing cool works based on the older property of others than they do milking their own old properties. In the case of piracy it may do more to give works publicity than to undermine sales (some studies have suggested that). That they get the legislation they want does not guarantee that the legislation actually profits them.
It’s hard to say whether Disney would be hurt or would benefit from losing Steamboat Willie. What if someone makes a Steamboat Willie-Mickey spinoff and it is wildly successful and popular? Does that benefit Disney, who holds the rest of the IP/trademarks of Mickey? I would think so, as they’d be in a better position to exploit a sudden Mickey craze.
But as time moves forwards, they would lose more and more of Mickey and that advantage goes away. There’s no good reason, in the long run, to give up such a monopoly, particularly if your loss only becomes bigger with time.
Luckily the current Disney moneymaker IPs are far more recent (Marvel and Star Wars, and the Pixar films), with the oldest one being Spider-Man from 1962, which puts losing him at a pretty good distance.
But they also hold a very profitable stable of “Disney Princesses” whose loss to the public domain would start only 9 years after Steamboat Willie.
I don't see any reason not to at least try to get another copyright extension.
The present value of money to be paid to me 75 years in the future is really low, like three cents on the dollar or something (assuming I entered the formula right in Excel, and assuming a 5% discount rate). So adding years of copyright protection is an incredibly inefficient way to add extra incentive for me to produce things worth copyrighting.
@ albatross11:
Not as inefficient as you think. The nominal interest rate includes an allowance for expected inflation. Real interest rates, nominal rate minus the inflation rate, tend to be about one or two percent, not five percent. At a one percent discount rate, a dollar 75 years in the future has a present value of about 47 cents. At two percent, about 23 cents.
It’s the real interest rate that is relevant because inflation will raise the nominal value of what your copyrighted work sells for.
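If anyone wants to check those figures, here is the same present-value calculation in Python rather than Excel (the discount rates are just the ones mentioned above, not recommendations):

# present value of $1 paid 75 years from now, at a few real discount rates
for r in (0.01, 0.02, 0.05):
    pv = 1 / (1 + r) ** 75
    print(f"at {r:.0%}: about {pv:.2f} on the dollar")

This prints roughly 0.47, 0.23, and 0.03, matching the numbers quoted above.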
The long-term profitability of the IP itself may be limited, but I could see a potentially important effect in holding off rival media companies. Audiences are more likely to seek out known IPs than new ones, so a new company could have an easier time building a fan base by working off known stories than by trying to present new ones. Getting funds from investors is certainly easier too, enabling higher-quality production from a new studio. So the effect may be less "I want to make profits off this IP" and more "I want to make it harder for start-ups to achieve success and challenge me in the market over the long term." An interesting test here would be the frequency of new media company starts in places with weak copyright versus places with stronger copyright.
David:
Wow, thanks for pointing that out. It never occurred to me that I should be thinking of the real interest rate there!
Not sure why anyone is even slightly surprised by this. Classic Red Queen effect. In an arms race, there is frequently no net gain for the participants. Were the researchers not aware of or expecting this outcome?
For most of my adult life, I falsely believed that the Presidency and Senate seats went to whoever spent more money, and the only thing that trumped being able to buy office was gerrymandering.
Well, the last data I saw was that every factor of 2 by which one candidate outspent another shifted the popular vote about 1% in their favor. Hardly decisive at the margin, but it does mean you have to play the game so you don't get outspent by a factor of 100 or something, especially since there are large contingents of 'always votes Democrat' and 'always votes Republican' voters who won't be swayed, making that 1% more valuable.
These studies are easily confounded by reverse causation:
The most popular candidate will naturally attract both the most votes and the most donations. A naive analysis will then "prove" that the money caused the votes.
If a study doesn't explicitly mention that it controlled for this, you can assume it didn't.
Yes, that's why the study compared money raised by the politician versus money spent by previously wealthy politicians out of their own pockets. The former correlates with victory much more than the latter, but the latter does still have an influence.
I wonder if the juxtaposition of that link paragraph and the next one (re: Mickey Mouse copyright) was intentional. Surely, the latter is an example of lobbying effect that did help a company and its shareholders?
Likewise other very concrete efforts. Surely the hedge and PE funds got their money’s worth in that the recent tax bill did not eliminate the carried interest loophole despite near unanimous agreement by everyone other than them that it ought to be.
As far as I know, the carried interest loophole is something that benefits the management of hedge funds, not the funds themselves directly or the shareholders of the funds. I've already commented on copyright. And in any event, for the thesis of the paper to be true, it would only be necessary that the revenue from lobbying be no greater than could have been earned by other uses of the time and resources, not that the revenue be zero. A final point: successful lobbying may carry a cost in bad publicity, as well as greater oversight and meddling from a government that thinks it deserves a return for the favor.
As I understand it, the management companies of funds are generally separate entities from the funds themselves; those entities have their own profits and losses, and they are the ones that paid the lobbyists. The major exception would be Vanguard, which has a co-op structure in which the funds own the management entity.
So I think it makes sense to look at the cost/benefit from the POV of the managing entities rather than the funds. That said, since the managing entities themselves have employees and owners, we'd want to look at the interests of the owners rather than those of the employees.
Microsoft didn’t start lobbying in a big way until the antitrust case against them. So lobbying may not help, but not-lobbying can be an existential threat.
Lobbying is Pascal’s Wager for businesses (just like so much else in business operations, really.)
Everyone *HAS* to do it, even if it doesn't seem to work, because if it turns out that it actually does work, then anyone who didn't do it is boned.
If everyone had to do it then spending money at all would be > not spending money and would show up in the analysis in some way.
You don’t even need an understanding of politics. Just an understanding of how corporations work will point out that they waste a lot of money on whatever the insiders’ hobby-horses are.
I’d also add that lobbying suffers from Moloch – a lot of companies end up lobbying for causes which are unprofitable for them (like Coca-Cola and Pepsi being part of a lobbying group that advocates against climate regulations even though their own climate record is actually pretty good).
That AI bias article was awful. The author is correct that statistical bias is different from discriminatory bias — but in the kinds of settings propublica is talking about we don’t want the latter.
I happen to have gone to a talk by the author of the propublica expose in question, and I can assure you, she knows what she is talking about, and surely understands the difference between these two types of bias.
Journalists aren't misleading people; they are correctly pointing out that regression models can encode awful biases in our society. And ideas other than regression models are needed.
Simple example: regressions will fail to parole folks with big criminal records. But in places like Baltimore it’s very easy to get a long criminal record for no better reason than cops loving to hassle African Americans even if they are doing nothing wrong.
In that case, though, wouldn't better AI notice that "black Baltimoreans with criminal records" are outliers and should be disaggregated from the group "people with criminal records"?
The AI won’t notice (or won’t notice as much as it “should”), because more-or-less-law-abiding black Baltimoreans will continue to be hassled if paroled, thus increasing their risk of reoffense (even though, in the hypothetical, the reoffense is bogus).
In theory, a sufficiently competent human might be able to account for this by accepting higher reoffense rates for black Baltimoreans than other demographics; I’m not sure if they do, or how we would check if they’re allowing the correct amount of slack.
This isn’t a problem with the algorithm, it’s a problem with the system described by the algorithm, and/or the metric being targeted.
If you really want to predict which individuals are going to be arrested again, then an unbiased algorithm will (and should) measure any bias in the police’s own processes that affect the arrest rate, and use that to make an accurate prediction.
If you change the target to “chance of violent reoffending” or “chance of reoffending excluding trivial nonviolent charges”, then your algorithm should be able to pick up the delta between these two probabilities.
And furthermore, if we run a number of different classifiers, or pick a regression algorithm that outputs the weights on a decision tree used to produce predictions, then we should be able to see which factor is contributing to the disproportionate number of blacks in the population of reoffenders. For example, in this case, given all the numbers, the classifier could surface the skewed arrest-rate-to-crime-rate ratio, and if those weights were published, it would be much easier to spot this kind of bias in the underlying system described by the algorithm.
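A toy sketch of that weight-comparison idea, with entirely synthetic data and hypothetical variable names (this has nothing to do with Northpointe's actual model): fit the same regression against two different targets and compare the weight on an "over-policed neighborhood" indicator.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
over_policed = rng.integers(0, 2, n)              # hypothetical flag: lives where police over-arrest
propensity = rng.normal(0, 1, n)                  # unobserved tendency toward violence
prior_arrests = rng.poisson(np.exp(0.3 * propensity + 0.8 * over_policed))
X = np.column_stack([prior_arrests, over_policed])

# target 1: rearrested at all (partly an artifact of policing intensity)
rearrest = rng.random(n) < 1 / (1 + np.exp(1.0 - 0.5 * propensity - 1.0 * over_policed))
# target 2: violent reoffense (depends only on the underlying propensity)
violent = rng.random(n) < 1 / (1 + np.exp(2.0 - 1.0 * propensity))

for name, y in [("rearrest", rearrest), ("violent", violent)]:
    print(name, LogisticRegression().fit(X, y).coef_)
# the weight on the over-policing flag is large for the first target and shrinks (or flips sign) for the second

The point is only that comparing published weights across the two targets would expose how much of the prediction rides on policing intensity rather than on the underlying behavior.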
A regression model, of the type Northpointe uses, learns an optimal mapping from inputs to outputs in a given class. It can’t notice anything.
—
Anyways, here’s our first take on the problem:
https://arxiv.org/abs/1705.10378
I think this might be the root cause of the pushback you’re receiving; color me skeptical that there’s any approach for which no examples produce counter-intuitive conclusions.
Better AI or a jury of their peers. This article describes a study comparing COMPAS with a wisdom-of-crowds approach using people from Mechanical Turk to estimate risk to reoffend: https://www.wired.com/story/crime-predicting-algorithms-may-not-outperform-untrained-humans/
The humans did slightly better.
(On the topic of the wisdom of crowds–and the prev SSC post–I wonder if anyone has ever tried a dating app based on that concept? Like, users could be required to suggest some matches between profiles from a distant city to access more of their own matches).
Crowd-sourced dating suggestions is a really neat idea.
I was very skeptical of Luna (especially the cryptocurrency aspect), but if they implemented this feature, it would be really interesting. You could reward users with stars for correctly guessing which profiles other users would want to see (as determined by how the latter users rate their matches). That sounds fun! Who wouldn’t want to play a little matchmaker for random internet strangers? I bet some people would sign up for that even if they weren’t looking to date, just to play it as a game. You could also have leaderboards for top matchmakers, and an option for satisfied couples to send thank-you notes to all the users who matched them. I suppose there could be issues with perverse incentives, privacy, or whatever, but I don’t see why it would be any worse than baseline for internet dating.
“leaderboards for top matchmakers”
I can definitely see some people getting really into that!
I wonder if that could even work as a third party app that used a bot or something to scour eharmony or match for profiles and showed them to players semi-anonymously to match.
@Tenacious D
If you get a streak, do you get a supply package, or an airstrike, or something?
You have the ability to award flowers, nice perfume, or, at high levels, a string quartet to the next pair you set up on a date.
Of course, matches occurring as a result of a match-streak reward don’t count towards your next reward, but they do help you level up faster!
This sounds way cooler than Luna. The waiter walks up to the table. The couple sitting at it, out for their anniversary a year after their first date there, look up. "Excuse me, but I have a bottle of champagne for you, courtesy of killerdawg42069. He thanks you for continuing his streak."
@ dndnrsn and Randy:
I like your streak reward idea!
I haven't read the ProPublica piece, but most people I've seen write about this, or have talked to about this, seem to have exactly the misunderstanding that the article's authors attribute to people.
My sense is that there’s a fairly wilful attempt to conflate the two, or even a refusal to recognise any meaningful distinction between “this algorithm leads to worse outcomes for [protected group] than for others” and “this algorithm unfairly penalises members of [protected group]”.
I’ve linked a number of people to Scott’s Framing for Heat Instead of Light as a tech-agnostic explanation of what’s wrong with these stories. It’s often helpful just to distill down the insight that “ties into correlates with race” and “creates new action based on race” are vastly different events.
And yes, I agree that many people actively fail to employ this distinction. I haven’t read ProPublica on COMPAS, but their series on bias in auto insurance made the error – even after acknowledging that case and claiming to avoid it!
In some cases this can be thought of as “introduces error” vs “propagates racism error”. (We could also I suppose have a category for “amplifies”.) It’s very hard (usually impossible!) to do better than propagating error…
Yup, a lot of people who write fairness papers in ML are ?confused about/aware of? this point, also. Luckily, this is changing.
The whole concept of disparate impact is about recognizing facially-neutral rules with worse outcomes for protected groups as unfair. There’s a principled version of this position, but I think it’s fair to say popular articles aren’t going to a ton of effort to explain “this is biased in the disparate-impact way, which is different from classic bias but still important”.
It is worth noting that the disparate impact legal test is part of burden shifting frameworks, rather than being the sole element needed to be proved to win a case under the civil rights laws. There seems to be some willful misstatements of how the law works in this area (when pushed back on, often defended on the grounds that it is de facto sufficient without any evidence for that claim).
If protected groups differ from other groups in ways besides the protected characteristic, disparate impact may not be unfair.
Nybbler: You’re right, and I realize the way I phrased that made it sound like I meant disparate impact is always unfair. It’s not. But it’s sometimes unfair, even in some cases where the rules don’t seem overtly unfair.
In the case of COMPAS, the actual effect depends on how the judge uses the score. But if the algorithm’s risk score were the sole factor in parole decisions, I do think it would be unfair, despite being accurate.
Why? Well, do we want parole decisions to be based solely on recidivism chance? Maybe we want it to accomplish other goals, like incentivising good behavior in prison. And most people feel that justice decisions should be based on things people have done, rather than things we have reason to believe they’re going to do (as I learned anew when I brought up Robin Hanson’s private justice proposal last open thread). Throwing out these considerations because we have an accurate algorithm for recidivism chance would be wrong, and I suspect wrong to the disproportionate detriment of black people.
All else aside, this whole discussion is a good example of why I tried to write up some exposition on unfair outcomes from fair tests, which I find myself linking over and over. An algorithm that “fairly” penalizes a group in terms of the point estimates it produces will be unfair (in a mathematical sense that’s sometimes a “fairness” we care more about) as an aggregate decision process.
By "most people" do you mean "random people on the street"? If so, of course they will misunderstand; random people on the street don't know what EITHER type of bias even is. People on the street will also think that quantum computers can solve NP-complete problems in polynomial time, regardless of how well someone like Scott Aaronson might write a user-friendly explanation that this isn't the case.
But I don't think it would then be fair for a physicist-turned-data-scientist to write a post about how Scott Aaronson is misleading people.
Here's a gamble I'm willing to make with you. Let's show the ProPublica article about COMPAS to a sample of regular people. We'll ask them:
Which of the following is the main thesis of this article?
a) The COMPAS algorithm incorrectly and systematically predicts blacks (but not whites) will commit new crimes if released on parole.
b) The COMPAS algorithm accurately predicts blacks are more likely to commit new crimes than whites if released on parole, and this leads to more blacks being kept in jail. The authors consider this unfair.
I’ve seen virtually every commenter on this article (on reddit, hackernews, etc) mistakenly believe (a). For example:
https://www.reddit.com/r/slatestarcodex/comments/73f1pe/culture_war_roundup_for_the_week_following_sept/dnsp2g3/?context=3 https://www.reddit.com/r/slatestarcodex/comments/73f1pe/culture_war_roundup_for_the_week_following_sept/dnsh26w/ https://www.reddit.com/r/slatestarcodex/comments/7qk2bq/culture_war_roundup_for_the_week_of_january_15/dsw4qtf/
Do you believe that 75% or more of regular readers would choose the correct answer? If so, I’m willing to gamble against you.
I posit that most people will take away:
c) The COMPAS algorithm treats Black people unfairly
and won’t think very deeply about it beyond that.
The problem is with the language. Stucchio and Mahapatra think that "people" understand bias the way they do. People don't. People understand it the way the journalists doing the reporting do. (Source: I'm a statistician who spends a lot of time talking about things like statistical bias to people who aren't.) SSC readers may be more literate in statistics jargon, so that might not apply here, but for the general populace, including plenty of people with graduate degrees in technical fields, the definition they'll think of is the colloquial one.
As for the substance of the article, I'm nonplussed. Their argument in its entirety is that "an algorithm that minimizes a loss function succeeds at minimizing the loss function it was given, therefore it's a good algorithm." This is a tautology. The criticism they need to engage with is "the algorithm may not be minimizing the correct loss function."
The 'common use bias' vs. 'statisticians' bias' distinction is a red herring; journalists obviously aren't intending to use 'bias' in the statistical sense.
Yeah, that’s my point. The article Scott linked says the following:
“The media is misleading people. When an ordinary person uses the term “biased,” they think this means that incorrect decisions are made — that a lender systematically refuses loans to blacks who would otherwise repay them. When the media uses the term “bias”, they mean something very different — a lender systematically failing to issue loans to black people regardless of whether or not they would pay them back.”
This is of course absurd. When “an ordinary person” uses the term “biased”, they mean it in exactly the same way the “media” is using it here.
Sorry, I should have been clearer that I was agreeing with your point; I just suspected it was deliberate on the author's part to pretend that the definition of 'bias' was the source of the issue.
Google:
bias: prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.
discrimination: the unjust or prejudicial treatment of different categories of people or things, especially on the grounds of race, age, or sex.
I expect that most common people don’t even understand the distinction and believe that either both are the case or neither.
It would still be better to not link articles that claim that all journalists talking about AI bias are peddling junk. My understanding is that people are concerned about situations where:
There’s a difference in groups A and B due to some upstream factors, e.g. group A tends to have lower socio-economic status; and instead of learning to discriminate using the (useful) socio-economic status variable, the AI learns to simply distinguish using the (also useful, but more obviously unfair) labels A and B.
And that’s _not_ what’s happening here. The labels A and B aren’t even given to the algorithm.
Are you talking about the ProPublica piece? Because the Jacobite piece isn't just responding to that.
Though as an aside, you could hide the labels A and B and still run into trouble, if your AI has access to variables that correlate with being A or B but still don’t directly influence the measure actually targeted. E.g. ‘likes hiphop music’ will still be a useful variable even if it’s not the ‘correct’ one.
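A minimal sketch of that proxy effect, with made-up variables (nothing here corresponds to a real dataset): the outcome is driven only by true socio-economic status, the model never sees the group label, yet a taste variable that merely tracks the group still picks up weight because the SES measurement the model sees is noisy.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 100_000
group = rng.integers(0, 2, n)                    # hidden A/B label, never given to the model
true_ses = rng.normal(0, 1, n) - 0.8 * group     # group B is poorer on average
obs_ses = true_ses + rng.normal(0, 1, n)         # the model only sees a noisy SES measurement
proxy = group + rng.normal(0, 0.5, n)            # e.g. a taste variable that merely correlates with group
y = rng.random(n) < 1 / (1 + np.exp(0.5 + true_ses))   # outcome depends only on true SES

model = LogisticRegression().fit(np.column_stack([obs_ses, proxy]), y)
print(model.coef_)   # the proxy column gets a nonzero weight despite having no causal role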
I’m not sure if it’s relevant to your argument, but this gets into another aspect of justice which I suspect is more complicated.
There are two questions here which might need to be separated:
1) What elements of socioeconomic status, racial discrimination, etc., led this person to the point where they were convicted of the given crime?
2) Given the person in front of them, what is the likelihood that they will [violently] re-offend?
I think it's possible that the answer to #2 may be calculated with a reasonable degree of accuracy/precision, but people are mostly worried about #1 and don't have a good way to address it. So they attempt to address #2 as a stand-in for #1, because it's a more tractable problem.
I would distinguish between trying to prevent crime before it happens and trying to punish crime after it happens. I would be more sympathetic to using race, as in “racial profiling,” to prevent crime. I’m wary of using race as a factor in determining punishment for crime.
The problem is that the two issues overlap in questions like determining when to let a prisoner out of prison, which has both a punishment for a past infraction aspect and a prevention of future crime aspect.
That, say, an Asian Indian-American in prison for a violent crime would be less likely statistically to commit another violent crime than an African-American with the same history of violence would suggest that it would be a statistically more efficient use of prison resources to let the Indian out before the black.
On the other hand, the idea of punishing one individual more than another individual due to racial differences seems distasteful and not in sync with the best principles of Anglo-American jurisprudence. I don’t really know how to resolve the two feelings.
I feel like modern parole rules are a nonsensical black box that we’d be better off eliminating entirely.
Have standard sentences that are the same for everyone, based on the crime. You cannot get out early for “good behavior” or whatever, and once you’re out, you’re out, none of this “check in with your officer and take three drug tests a week” nonsense. Bad behavior (escape attempts, assaulting a guard, etc.) carries standard additions to your sentence. Good behavior is the expectation and carries no “reward.” Bad behavior carries extra punishment. No elaborate system for letting some people out early with a ton of strings attached while others continue to languish in jail.
What would be so wrong with this?
@Matt –
Eh. I think that it’s reasonable to have flex in the system. Firstly, because there is probably value in ‘stepping down’ confinement/oversight on severe criminals to help guard against relapse. Prove that one can be on good behavior with a little freedom and one gets a bit more freedom.
Secondly, because situations do change and a society might end up with more prisoners (or fewer facilities) than they had planned on, and having an alternative correction scheme already coded in can prevent log jams and rash decisions.
Thirdly, because plea-bargaining and prosecutorial discretion are real things, and if prosecutors and judges can’t arrange for the guy convicted of a sympathetic manslaughter to spend only five years in prison, they’ll arrange for him to be convicted of negligent homicide or whatever else it is that results in the desired lesser punishment.
Fourthly, because even if you do away with plea bargains juries are still a thing, and so is Google, and if there’s a mandatory twenty-year sentence for manslaughter then juries will know that even though you tell them not to. And then the guy who committed a sympathetic manslaughter won’t be convicted at all.
Fifthly, because the Bastille was in fact stormed. People don’t want everyone who commits the same crime to do the same time, and they won’t tolerate it. People want the guy who in a fit of passion shoots his daughter’s rapist to spend less time in prison than the guy who in a fit of passion shoots his more successful business rival, and one way or another that’s what they are going to get. Whatever you put between them and that objective, is going to get broken.
Steve:
I see your point, but I think something like that is inevitable if you do any statistical forecasting to decide which prisoners to release early. You’re basically going to put each prisoner who’s up for parole into some kind of bin based on people like them–people my age, sex, education level, people who committed the same crime I did, etc.–and then, you’re going to judge how likely I am to commit future crimes based on what other people who were similar to me did.
One possible angle on this difficult question is the matter of time: racial profiling is more reasonable under time pressure.
For example, your daughter’s car breaks down at night in a bad neighborhood and her phone is dead. She can walk either north or south to get help. North of her she can see six African-American teens loitering on the corner, while south of her she can see six Laotian-American teens loitering on the corner. Which way should she go? Is she wrong to use race as a major input in her decision-making?
In contrast, a parole board has time to research in some depth into individual characteristics, so putting a high value on race would seem more questionable.
I have read the ProPublica piece and I assumed that it was ignorance. But if Ilya insists that it is malice, I will believe him.
What do you think I am asserting?
That’s a good question, actually. I think perhaps folks victimized by this don’t have the resources, and aren’t aware enough to contact the ACLU. This absence of lawsuits might change, actually…
Or they may fear that they will lose (a belief in institutional racism/sexism implies a belief that a judge will not be fair) & believe more in ‘direct action.’
I went through local newspaper reports for 2017 homicide totals in the 51 biggest cities in the country and Baltimore had the second highest homicide rate (after St. Louis) last year, so apparently somebody is doing something wrong in Baltimore:
http://takimag.com/article/president_trumps_murder_report_card_steve_sailer#axzz54u0kGUCA
Unusually racist policing could really easily create a crime wave. Both from people with high melanin going "I've been arrested before, I'll get arrested again. If I'm going to do the time, might as well actually do the crime" and robbing the nearest bank when they can't get a job due to their bogus criminal record.
And also from the police arresting the wrong people, or failing to catch career criminals, simply because they have made themselves so very unpopular that no one tells them anything, even when they are looking for awful people.
This could also apply if the police are unjustly perceived as racist.
Have you looked at pictures of Baltimore City’s elected leadership and its police force?
That they are black does not mean they are not racist against blacks.
> That they are black does not mean they are not racist against blacks.
I think black people can absolutely have biases against other black people. In general, people are quite capable of having prejudices against their own group; I’ve encountered some hardcore SJ white people with pretty strong anti-white bias.
So yeah, a cop being black doesn’t mean he’s not racist, but I do think that introduces a level of nuance and complexity that tends to get brushed under the rug in the “racist white cops murdering blacks in the name of white supremacy” narrative. Sometimes people will try to spin it as “well, those anti-black black cops are just tools of white supremacy; they’re puppets being manipulated by white people, so it’s still the doing of white people.” But I think this is a pretty transparently desperate tactic. Racism, and prejudice in general, are complex psychological issues that can’t generally be reduced to one group oppressing everyone else.
I’ve wondered whether random policing is worse than no policing. People are less likely to think that obeying the rules/living decently will do them any good, and there’s no chance for local policing to develop.
I highly recommend this comment from the subreddit. It's clear that the author either does not know what she is talking about or set out to deliberately mislead.
Also, Inherent Trade-Offs in the Fair Determination of Risk Scores.
> According to their own data, black offenders and white offenders with a given risk score were equally likely to be convicted of another offense within two years of release. But that wasn’t consistent with the narrative they wanted to tell, so they buried that part.
This person is confused; this sort of equality is completely irrelevant to whether discrimination is occurring (because conditional distributions tell you nothing about discrimination in general). This person, in addition to being confused, has a political axe to grind as well.
—
Now, the fact is, there _is_ a conversation taking place within the algorithmic fairness community on how to formalize biases of interest properly. However, one thing that probably is not going to help is ask_the_Donald level hot takes.
The entire premise of the propublica piece is about conditional distributions. Their big finding was that:
The entire premise of the propublica article is that Northpointe is selling proprietary regression models which people use in sentencing and parole hearings now. How sure are we that these regression models aren’t discriminating? I don’t think Northpointe thought about this very hard.
There is a huge conversation now in statistics and ML about what properties, precisely, a statistical procedure should have to avoid bias, of the discrimination variety. Lots of papers, and so on. I am going to the “ethics and AI” conference colocated with AAAI this year, and (aside from alignment and so on), addressing these types of discriminatory biases is a big topic at this conference:
http://www.aies-conference.com/
All due to articles like the propublica one. The entire premise of the author’s talk, by the way, is that academics were falling down on the job, so journalists like her had to learn a bit of data science to start the conversation. Academics are listening now, and working on the problem, so her goal was accomplished.
—
By the way, we analyzed COMPAS data, and based on what we were able to get our hands on, we concluded that what they are doing is really bad (by our lights). If you want, we can talk about that in detail, or you can just read our paper.
—
You are trying to kill the messenger.
Given that, conditional on risk score, people were about equally likely to reoffend regardless of race…pretty sure?
I’m not sure about where you are suggesting discrimination even enters into the question here. Are you saying that white people are less likely to be caught if they reoffend, so the equal reoffense rates are hiding discrimination beneath them? If so, that’s hardly a matter to be solved at the level of risk scoring…
I am saying I don’t think you thought very hard about what the legal definition of discrimination is or ought to be. As it happens, folks in law _have_ thought fairly hard about this (similarly to how they thought fairly hard about causation, with their “but for” test). It might be worthwhile for you to read about that stuff. It’s sort of directly relevant.
Anon – I used to agree 100% with that reddit comment you linked to, but after a bit of thought, I think we can meet Shpitser half way. See my comment here.
I think we have to start with a definition of discrimination we both agree on. Once we have that down, it’s just math.
I suppose the _legal_ definition in the US basically comes down to disparate impact, but that is because they start with HNU as an axiom. Certainly, if we optimize for the legal structure instead of reality we would end up with very different things; I don't disagree with you on that point. But the legal definition is no different from the ideal reality of the journalists.
The definitions of discrimination that appear in the legal literature are not all based on disparate impact. It’s sort of similarly complicated to causation in law, see David Friedman’s comment in this thread. We use a particular counterfactual one in our paper.
We could talk about whether it’s sensible (e.g. captures human moral consensus on discrimination) I think it is, and it does.
“I think we have to start with a definition of discrimination we both agree on.”
The entire point of the article was that journalism misinterprets the term bias in an (intentionally) inaccurate way. Hand wringing over academic definitions is the motte / bailey doctrine in action as far as this debate goes. The plain reading of journalism’s use of bias and discrimination is one of immoral intent / outcome.
Ilya,
You seem to be reluctant to spell out what you mean by “bias” and “discrimination” even though you seem offended that other commenters aren’t on the same wavelength as you as to whatever it is you mean. Could you illuminate us by copy and pasting from something else you’ve written what exactly it is you mean?
“Bias” is an overloaded term. There are lots of kinds of bias — I gave some examples of selection bias in data generation, did you read it? There is also “statistical bias,” which is expected error.
There are cognitive biases, which LW folks like to think about — the classification project for those started with Francis Bacon, I suppose.
In epidemiology there is a whole taxonomy of biases — Berkson’s bias, selection bias, confounding bias, and so on. Do you want to get into this? I am not avoiding discussing this stuff, I teach this stuff to undergrads. I just didn’t think a full discussion was relevant here. Although I did notice the author of the article that started this seemed to conflate statistical bias of an estimator and selection bias in the data.
—
“Discrimination” is not so easy to define. Which I think is one of the main takeaways here. If someone thinks it’s easy, I think that’s an indication they have not thought about the problem very hard. This is mirroring an earlier difficulty people had defining what “actual causation” means. It seems like this should also be easy, but it’s really really not.
We have a definition in our paper linked to causal paths. There are about a million papers that do other definitions. Obviously, we think ours is better motivated (in part because of a link to existing legal definitions), but there’s room for argument.
—
I think it’s sort of ironic you accuse me of reluctance where I literally linked a paper I wrote spelling this out, and spent all of yesterday trying to discuss it. Are you reading the same thread as me?
I could have said “we define discrimination as the presence of a path-specific effect. Don’t know what that is? Here are three papers on this, but trust me it’s really good.” But I don’t think that would be a very effective way for me to communicate.
I did try to describe informally, in a few places.
—
I wouldn’t say I am offended, but it is true that there is a certain class of slatestar poster that I don’t really wish to engage, that I don’t think contribute anything interesting, and that I wish went away.
Well, that’s what the folks here were complaining about. The journalists were conflating some concepts and this is misleading to the public.
Summing that with your first comment:
If you are talking about the same person (sorry, I'm getting confused), then either (1) there is a contradiction between the statements, or (2) she made a mistake in the writing, innocently, or (3) she made a mistake in the writing, for malicious reasons. Elsewhere in the thread, you seem to be defending (2) and especially rebuking (3). That's fine, we can believe it was innocent; even the Stucchio piece was not saying they were dishonest, ill-willed people.
@moscanarius
I think “the article that started this” is Stucchio’s.
@rlms
I don't know; he mentioned in another comment that he thinks the propublica authors discussed it poorly. And it makes more sense that "the article that started this" is the propublica one, at least the way I read it.
In fact, I think most of this confusion comes down to Ilya objecting to Stucchio saying the propublica piece was "misleading", on the grounds that he thinks "misleading" implies bad intentions, and he believes the original authors meant no harm (because he's met one of the authors).
Hi, I meant the person replying here is confused, based on their first comment.
The ProPublica folks probably _also_ are confused. And just to be clear, I don't blame people for being confused, because what we are talking about is difficult. I am not playing gotcha. My entire point is precisely that we shouldn't criticize people for trying to address hard topics, even if they get things wrong.
Then, in all fairness, why not state it clearly and stick to it? Because I'm not the only one who noticed you shooting in all directions on this thread.
I didn't read Stucchio as criticizing the propublica people for "trying to address a hard topic", but for making statements Stucchio thought could mislead the public. Surely that's allowed? If their criticism is wrong (as you think it is) and the propublica people are right (which you don't think? You have not been very clear), then of course it is fair for you to defend the propublica piece. But otherwise, your only criticism is that you think Stucchio was too heavy-handed and not super precise herself. No need to complicate the situation.
Let’s see:
1. You think the propublica piece was good because it brought attention to an important problem, even if some people could take the wrong conclusions.
2. You think that despite being good, the propublica piece made some poor characterizations (yes? probably? no? You have not been clear). You acknowledge that this may lead to people getting the wrong ideas.
3. You think the Stucchio piece in response was bad because:
3a. It says the journalists are misleading people, and you object to this characterization because you don’t believe the journalists have bad intentions.
3b. It says the journalists are ignorant of the key concepts, and you object to this because you know one of the propublica journalists and can vouch for her knowledge and honesty.
3c. It is too harsh a criticism, and may disincentivize other journalists from writing about important topics. That’s killing the messenger, somehow.
3d. It is an irrelevant criticism, as every text can be misleading, and it doesn’t matter very much if the common people are getting the wrong ideas, since the scientists that matter got the right one.
3e. You think the Jacobite piece is advancing the wrong arguments itself.
If that’s it, here’s my take:
3a. "Misleading" doesn't always imply bad intent. I think the only part of Stucchio's piece that could be seen as pushing the assumption of bad faith is the last few paragraphs. And notice that the piece does not refer only to the propublica article. Overall, I think they mean "misleading" in the (very common) sense of "phrased in a way that induces the audience to error, despite intentions to the contrary".
3b. There are 4 authors on the propublica piece; you can vouch for one. Can you be sure this friend of yours was involved in all of the writing? Can you be sure the other 3 are as qualified? Can you vouch that every author, editor, and advertiser involved was as candid?
3c. I highly doubt the disincentive would be so strong. Jacobite mag is not exactly the backbone of the press; its readers are not that influential. Besides, harsh criticism is available for everything journalists write. I think you are overestimating the critics' power here.
3d. The fact that all texts can be misleading is no justification for denying criticism of any one specific text for being misleading. I disagree that if the Smart People got it right there is no reason to point out what could mislead the other folks. ProPublica is not an academic publication, supposed to be read only by scientists with the proper qualifications. You may not want to spend your time doing this, but what's so wrong about other people doing it?
3e. That was not your original criticism, though.
EDIT: I see that Stucchio has said in a comment here that she thinks the propublica piece was click-motivated. That adds more weight to your criticism, even though it does not appear in the original piece.
Thanks!
1. Agreed.
2. Yes, I think propublica was not thinking about "machine bias" (in their title) in the right way, and has been criticized for this by lots of folks. That is ok. This is hard to get right. I think they were thinking about this problem in the wrong sort of way without an intent to mislead.
3a. Agreed.
3b. Journalists who work on science reporting may sometimes be bad (e.g. innumerate). This particular person whose talk I went to did not strike me as innumerate. She struck me as well-read and well-intentioned. This does _not_ of course imply that she ought to be able to crack the "machine bias" problem. Numeracy is necessary but not sufficient. I don't view cracking the problem as her job in the ecosystem.
3c. Agreed (well, on the margin).
3d. Agreed. Also there is a correlation/causation issue here. Just because a layperson read A and got confused about B does not imply reading A is what did it! It’s possible (and in fact is what usually happens) the layperson lacks the background to get the proper takeaway from A, in which case we have to decide how to attribute their bad takeaway to their lack of background vs A itself.
It is possible to communicate to a broad audience badly, of course, and we should try to do it well. So a criticism I would accept is “you did popular science badly, because of X,Y,Z.” But even here, the issue is usually incompetence rather than malice.
Folks who tend to mislead maliciously about science topics are, by my lights, more often academics themselves (for career reasons), rather than journalists. Of course journalistic fraud is also a thing.
3e. I may or may not agree. I noticed the author said some things that really did not seem right to me, though. But here, in the comments.
—
I also noticed that my exchange with the author ended with the author saying I am trying to bury the issue in obscurantism.
—
You know, I am pretty relaxed about disagreement. I disagree with lots of people about lots of things. Many of these people are my friends and colleagues. I don’t think we need to posit bad intent every time there is intractable disagreement on the internet.
@Ilya
That's OK, my doubts are answered. And I agree that the problems with any piece are more likely to come from incompetence than from malice. It's just that I don't see much reason for the beef with the word "misleading", or for going around throwing out side arguments when the main point could be more clearly stated. Sorry, but it looks like derailing, even when that is not the intention. Well, that may just be my preference; you are allowed to have yours.
Maybe I am bad at reading tone, but your first comments did not strike me as "relaxed", and I am sure I was not the only one to think this. Maybe much of the later misunderstanding followed from there.
I agree, but who’s this directed at? Well, nevermind.
I know nothing much about this particular issue, having read neither the original article nor the critique. But I think the usual problem is bias, not incompetence or malice. The journalist sees an academic article which can be interpreted in a way that fits the conclusion he wants to push (or which makes a particularly striking story). Because he agrees with that conclusion he is not inclined to look very carefully to see if that’s really what the article implies, still less to look carefully to see if the article itself has problems. So he writes up a popular piece that presents the article as supporting the conclusion he wants, with rhetoric designed to make the case look as strong as possible.
I think deliberate dishonesty is probably more common there, if only because the journalist has more wiggle room to interpret things the way he wants due to ignorance. On the other hand, the journalist may be being dishonest not in the facts he reports but in how he reports them. For example (from my blog).
An example of the scientist problem (from my blog).
I think this is a very general bias that applies as much to readers as to journalists. If I am making a case for some idea that you already accept, or want to accept, then in general I can make a really weak argument, produce a graph that doesn’t really agree with what I’m saying as evidence, maybe link to a couple papers or some data that doesn’t really support my argument, and a distressing number of people will nod their heads and say “yep, case closed.”
I also think it’s likely to be a mix of journalist fooling themselves and journalists convincing themselves that they’re telling the truth despite some noisy data and a somewhat confusing set of facts. I don’t have strong evidence here, but as best I can tell most journalists don’t want to lie even for a good cause. (There are journalists who lie or make up stories or just flat write propaganda, but I think most journalists don’t want to do that kind of thing.)
I suspect this is rather similar to what you can see from the replication crisis in the social sciences–rarely, there’s some actual fraud (I think some of the original priming research was faked, for example), but far more often there were social scientists convincing themselves they’d discovered something interesting out of some very messy data and complicated experiments, using sophisticated statistical tools that they maybe didn’t 100% understand. And I suspect they found it easier to convince themselves of results that re-enforced their existing political and social beliefs, and that would also be appealing to their advisors and colleagues and tenure committees and such.
This phenomenon makes me uneasy when I see news reporting on a culture-war-heavy story. It’s not that I think there are all that many journalists who are willing to lie, but there do demonstrably seem to be a fair number who will omit a lot of information that doesn’t fit the story they want to tell, or who will structure their stories so that the comfortable narrative is told in the headline and the first couple paragraphs of the story, while the facts that complicate or undermine the narrative only appear in the last paragraph or two of the story. It’s one reason I wish there were more right-leaning news sources that I actually thought were trying to do real reporting rather than propaganda–someone whose journalists were mostly Republicans would have a different set of blind spots and expectations and comfortable narratives, and maybe we’d get a better picture of reality as a result.
Journalism isn’t homogeneous. My observations have been that environmentalism and race relations are areas where monocultures now dominate. The biggest form of bias here is selection bias, consistently telling only one side of the story. This is very evident if one examines which anecdotal stories are reported.
To my mind, being ‘discriminatory’ would mean treating people who were otherwise alike differently based on their race, which in this context would mean that given a black and a white person with equal recidivism risk, the former would have a higher expected risk score than the latter.
Now, if the risk scores perfectly reflected recidivism risk then (i) they would not be ‘discriminatory’ in the above sense and (ii) it would also be true that black and white people with the same score had the same recidivism risk.
But let's suppose instead that score = recidivism risk plus an error term (which has the same distribution for everyone). Then this is still "non-discriminatory" in the above sense. However, if black people have a higher average risk, a black person with a given score is now *more* likely to recidivize than a white person with the same score, because conditional on the score, each person's expected true risk is pulled back toward their group's average.
Therefore, if a black person with a given score is equally likely to recidivize as a white person with the same score, then the system *is* discriminatory in the above sense.
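A quick simulation of that claim, under the stated assumption (score = true risk plus identically distributed noise) plus the extra assumption that the two groups differ in average risk; all the numbers are made up:

import numpy as np
rng = np.random.default_rng(0)

def simulate(mean_risk, n=200_000):
    risk = np.clip(rng.normal(mean_risk, 0.15, n), 0, 1)   # true recidivism risk
    score = risk + rng.normal(0, 0.15, n)                  # same error distribution for everyone
    reoffend = rng.random(n) < risk
    return score, reoffend

score_a, re_a = simulate(0.35)   # group with lower average risk
score_b, re_b = simulate(0.50)   # group with higher average risk

band = lambda s: (s > 0.40) & (s < 0.45)   # people who received (roughly) the same score
print(re_a[band(score_a)].mean(), re_b[band(score_b)].mean())
# the higher-base-rate group reoffends more often within the same score band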
Interesting.
(However, the fairness criterion ProPublica seems to be interested in – whether the proportion of non-recidivists who get high scores is the same between black and white defendants – has even less to do with “discrimination” as I’ve defined it above. It’s also a less intuitive thing to look at than ‘whether people with the same risk scores have the same reoffending rates’. It seems like the only reason to look at it at all is to generate propaganda for consumption by the mass public.)
It all comes down to the old issue about defining fairness and equality – you can have equality of opportunity (or a fairness of the process) or equality of outcomes (or a fairness of the results), but you can’t have both, they are inherently incompatible for any substantially different demographics. Given equal opportunity, you’ll have different expected outcomes and if you want equal expected outcomes, you must grant unequal opportunities; so also in this case, if you want a system to have results according to the true risk of recidivism, it’ll have to take the protected criterion (or a correlate) into account; and if you will ensure that the process is race-blind, it’ll have unequal results.
Thus, every system will be unfair in one way or another, and you can only pick the side or some tradeoff in the middle. You *can* argue that the side/tradeoff should get chosen differently, but simply criticizing a system by saying that it’s unfair by one definition or the other is silly, it’s like criticizing water for the (very inconvenient) property of making stuff wet.
Not really. What I’m saying is that there are two separate notions of “race-blindness” here that conflict with each other:
(1) People of different races with same risk score should have same risk of recidivism.
(2) People of different races with same risk of recidivism should (on average) have same risk score. [Better: they should have the same probability distributions of possible risk scores.]
The COMPAS algorithm is claimed to have property (1), but I would argue that
– Property (2) is intuitively what we mean by “non-discrimination”
– Perhaps surprisingly, (1) and (2) are not logically equivalent.
– If property (2) is true then property (1) will fail in a way that looks biased against whites.
– If property (1) is true then property (2) will fail in a way that looks biased against blacks. And this is more important because (2) is the ‘true meaning’ of non-discrimination (I claim).
Note: if in fact black people of a given risk score turn out to have a higher (not equal) risk of recidivism then it’s possible that property (2) holds after all.
(Finally, I will remark that this distinction between (1) and (2) has nothing to do with ProPublica’s findings here which are misleading and disingenuous for exactly the reasons Northpointe identified in their response.)
How do we measure this, even in principle? We don't know the true risk of recidivism of a given individual, not even after the fact, and there doesn't seem to be a natural way to bucket them.
I’m having difficulty working out why (1) != (2), let alone why adhering to one produces the appearance of bias on the other in the specific directions you specify; would you be willing to walk through the logic?
To The Nybbler:
That’s a critical question. It’s a ‘leap of faith’ to assume that there’s a single right answer (at a given time) to the question of what a person’s true recidivism risk is.
One extreme but consistent approach is to argue that the people who actually did recidivize have ‘true recidivism risk = 1’ and those who didn’t have ‘true recidivism risk = 0’, and then I think my logic becomes identical to ProPublica’s, doesn’t it?
The other extreme but consistent approach is to say that the true recidivism risk is whatever you get once you calibrate the COMPAS score (modulo bucket error). In this case, there is no "discrimination" (by my definition).
And between those two extremes I can’t see any *principled* way of defining the ‘true risk’ (unless we’ve just assumed it as a leap of faith).
I concede: my entire line of thought doesn’t go anywhere useful.
To Ghillie Dhu:
At the outset, let’s switch recividism risk for height, so that we can unproblematically talk about normal distributions.
Imagine height is the sum of two factors A and B. Assume that black people are taller on average. Imagine that when we try to measure height we can only see factor A. Say that A is the person’s “height score” whereas A+B is the true height.
This will satisfy property (2) as long as the following holds: given A+B, knowing a person’s race tells you no more information about A.
Is property (1) satisfied (under that same assumption)? Not necessarily, and in fact probably not. For instance, we could have A and B as i.i.d normal variables with one mean m for whites, and a higher mean m’ for blacks. Then having observed A = a, the expected height comes to a + m for whites and a + m’ for blacks.
Let’s keep A and B as independent normal variables but now say that the mean of B is the same for blacks and whites, while the mean of A is higher for blacks than whites. Then property (1) is satisfied: if a black and a white person have the same “height score” then they have the same expected height. But property (2) now fails: given a white and a black person of the same height, the white person probably has a lower A than the black person.
To go back to recidivism risk: "A" would represent everything that goes into a score like COMPAS, whereas "A+B" would be "the true recidivism risk". However, the only two meaningful interpretations of "the true recidivism risk" in sight are:
(i) "true recidivism risk" = "the recidivism risk conditional on the COMPAS score alone", in which case the whole thing becomes trivial and properties (1) and (2) are equivalent
(ii) "true recidivism risk" = "1 or 0 depending on whether the person actually recidivizes", in which case property (2) becomes "conditional on whether white and black offenders recidivize, they ought to have the same scores", which is the criterion ProPublica were using.
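To make the second height case above concrete, here is a small numerical check under those normal assumptions (the group means are arbitrary choices for illustration):

import numpy as np
rng = np.random.default_rng(0)
n = 500_000

def draw(mean_a):
    a = rng.normal(mean_a, 1, n)    # the observed "score" component A
    b = rng.normal(0, 1, n)         # the unobserved component B, same distribution for both groups
    return a, a + b                 # (score, true height)

a_w, h_w = draw(0.0)   # group W
a_b, h_b = draw(0.5)   # group B is taller on average, entirely through component A

band = lambda x, lo, hi: (x > lo) & (x < hi)
# property (1): same score, same expected height -- these two numbers come out roughly equal
print(h_w[band(a_w, 0.2, 0.3)].mean(), h_b[band(a_b, 0.2, 0.3)].mean())
# property (2): same height, same expected score -- group B's mean score is visibly higher
print(a_w[band(h_w, 0.7, 0.8)].mean(), a_b[band(h_b, 0.7, 0.8)].mean())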
@Ghillie Dhu I’m trying to get an intuitive understanding of this for myself, so let me see if I can explain it to you.
Let's choose a non-racial scenario. Let's say we have 100 rich kids and 100 poor kids interested in coding. 90 of the rich kids go to top colleges, and only 10 of the poor kids do. 50 of the kids at the top school learn to code well, and this is uncorrelated with their background, so the breakdown is 45 rich + 5 poor. Only 10 kids from the bottom school do the same, again uncorrelated, so the breakdown is 1 rich and 9 poor.
Google hires the 100 students who went to top schools. Google argues position (1): “Our standard has 50% probability of predicting good coders, for both rich and poor students”.
Google’s critics argue position (2): “14/60 good coders, or 23%, grew up poor, but only 10% of Google’s hires did!”
If I were to try to say it with no numbers, I would say that the poor, good coders problem is that they are hard to find among the pool of poor, bad coders, while the rich, good coders are relatively easy to distinguish from the rich, bad coders.
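Spelling out the arithmetic with the same numbers as above:

rich_top, poor_top = 90, 10          # who gets into the top school
rich_bottom, poor_bottom = 10, 90
rate_top, rate_bottom = 0.5, 0.1     # fraction who learn to code well at each school

good_rich = rich_top * rate_top + rich_bottom * rate_bottom   # 45 + 1 = 46
good_poor = poor_top * rate_top + poor_bottom * rate_bottom   # 5 + 9 = 14

print("hit rate among top-school hires:", rate_top)                           # 50%, rich or poor
print("poor share of all good coders:", good_poor / (good_rich + good_poor))  # about 23%
print("poor share of Google's hires:", poor_top / (rich_top + poor_top))      # 10%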
I think I’m getting there (thanks for indulging); to check my understanding:
For criterion (1) there are four subpopulations of interest:
(a) White high-scorers
(b) Black high-scorers
(c) White low-scorers
(d) Black low-scorers
Ideally the reoffense rate of (a) should match that of (b) AND the reoffense rate of (c) should match that of (d).
For criterion (2) the four subpopulations of interest are:
(i) White reoffenders
(ii) Black reoffenders
(iii) White non-reoffenders
(iv) Black non-reoffenders
Ideally the distributions of scores of (i) should match that of (ii) AND the distributions of scores of (iii) should match that of (iv), but if the relative sizes are such that (ii):(iv) > (i):(iii), then (b):(a)>(d):(c).
Is this just Simpson’s Paradox then?
An algorithm that factors in race (or some proxy for race) to determine parole risk is in a real sense biased even if “people with the same risk scores have the same re-offending rates”. The algorithm is using race to keep people in jail- punishing some people for being a certain race. That is bad.
If you took anything away from Stucchio’s article, it should be this, so why are you protesting it so hard?
Your argument is worthy of engaging, but it’s not ProPublica’s. Your argument is that, without conscious counter-discriminatory measures, already-existing discrimination could be made worse, because the data inputs themselves could be tainted. ProPublica’s argument is that the algorithm thinks black people are more likely to re-offend but they’re not. I guess how you get from their argument to yours is saying “they’re not but the cops lie about them being so and so they are arrested”, but ProPublica cited evidence of black people not being in jail despite being labeled so; if they’re being harassed by police and jailed unjustly, wouldn’t they, uh, be in jail?
As to your actual argument, I don’t like it for several reasons.
First off, it’s just the generic progressive / social justice argument, and I say this not so I can blast it with my superweapon, but because it embodies a lot of the flaws of those arguments. Namely: it’s like that in Baltimore, but is it like that in other cities in Maryland? Is it even like that in all the parts of Baltimore, for all black people? And how do you know, anyways? Heck, how do I know if what you’re saying is accurate? There’s no easy answer to all of these questions, so introducing conscious counter-discrimination has the potential to backfire drastically. In my opinion, social justice’s backfire has led to more racial tension; if the system is literally geared towards releasing black criminals onto the streets, you had better believe that there will be more racial tension!
So maybe you avoid the problem of AI deciding altogether by letting a jury of peers do it instead; according to this comment thread that’s slightly more accurate and I believe it. OK, but we’re mostly back to the same problem: how does a jury of peers know how to discount accurately? I guess local knowledge kind of helps (and this is probably your strongest argument), but even then they may not know well; moreover, while they may be counter-discriminatory on their own, they may also be discriminatory (i.e. racist), and either way creates problems.
Moreover and finally, the biggest problem here is that Baltimore police are apparently harassing citizens. I don’t think that there will be a bigger problem than that, and it should be fixed regardless; as it is fixed, the issue of algorithmic bias fixes itself. So…why not more effectively target that? Yeah, I recognize that it’s difficult and it’s been worked on a lot, but still.
“First off, it’s just the generic progressive / social justice argument”
For folks who wonder why I don’t want to engage a certain segment of the commentariat here and want them to go away, it’s this type of stuff.
This is a purposefully bad way of framing the conversation. The public’s interest is in minimizing the (expected and actual) number and severity of crimes committed by felons upon early release. A felon’s interest is almost irrelevant here, since there is no right to early release and in any case a felon loses some rights due to the felony committed.
Also, there is a difference between prejudice and discrimination. Whenever some, but fewer than all felons are granted early release, discrimination can be said to occur. The issue is what criteria do we use to discriminate between them. The best outcome is when this discrimination is not based on ill-conceived human prejudice (and I invite the interlocutor to consider the possibility that his opinions are also biased, no less than mine). And no, I don’t care about the deeply flawed US legal definition of discrimination.
Probably a person on the street who is not a rabid leftist would agree with me. Anything else is an attempt to obfuscate the issue.
@Ilya I am interested in reading your paper. Is https://arxiv.org/abs/1705.10378 the right one?
On a quick skim, it looks like I may also need some help with the vocabulary so, if you want to post a summary for the “good at math but don’t know ML and stats” crowd, that would rock.
Are you David Speyer @umich.edu? I would be happy to chat on skype/g+/zoom about our paper, if you wish. Or by email.
Yes, that arxiv url is the right one.
—
There are some stats details that are possibly not relevant, but the basic point is we need to think about a mathematical formalization of a sensible definition of discrimination that people would agree with. We cite a legal opinion that gives a very clear counterfactual “had everything else been the same” definition, which I think is the discrimination analogue of the Humean counterfactual definition of causation. Hume won — we all use his definition now in statistics. I hope that legal opinion will also win.
Once we have that counterfactual definition, it’s just a matter of formalizing it, using a (?the?) language of counterfactuals (which causal inference people happened to have developed already). The right way to do data analysis to avoid discrimination falls out of that.
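In symbols, the coarse version of that counterfactual criterion is roughly: for a decision $\hat{Y}$, protected attribute $A$, and everything else $X$,

$\hat{Y}(A \leftarrow a, X = x) = \hat{Y}(A \leftarrow a', X = x)$ for all $a, a', x$,

i.e. intervening on race alone, holding everything else fixed, should not change the decision. (This is only the coarse, whole-attribute version; the paper works path by path, so race is only required to be inert along the paths judged unfair, and may still matter along paths judged acceptable.)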
Yes, I’m that David Speyer. Feel free to e-mail if you want to. I don’t see a real reason to talk one on one rather than here, but am very willing to. I’m just someone who’s been following the debate, been frustrated by the lack of mathematical clarity in earlier publications, and thinks the topic is interesting. And I usually respond to the challenge to “read our paper” by clicking over to the arXiv. 🙂
If I am correctly following this, you mean the “but for” definition of causation.
The situation in law is a bit more complicated. My standard example is the situation where you meet a friend on the street, stop to chat for a moment, he then goes on and, a minute later, is crushed by a falling safe. If you had not stopped him he would not have been under the safe when it fell, so you are the but for cause of his death. But you are not legally liable.
One explanation, I think the usual one, is that the consequence was not foreseeable. Put in probabilistic terms, the conditional probability of his death was not affected by your conversation, given that, from your standpoint, exactly when the safe fell was a random variable.
The real case is Berry v. Borough of Sugar Notch, 191 Pa. 345, 43 A. 240 (1899).
Note that while I am a retired law professor I am not and have never been a lawyer, so my expertise is in the logic of the problem, not the current state of the relevant law.
“But for” in law is for torts (correct me if I am wrong, IANAL). But causation in law comes up a lot, so there are lots of other definitions / criteria. I agree with that.
I think fairness/discrimination is similar, there are some attempts to formalize in legal opinions. There is no one test. We picked the most reasonable one.
The case I cited was a tort case. The argument was that if the trolley had not exceeded the speed limit it would not have been under the tree when it fell. Exceeding the speed limit was illegal hence per se negligence. So both the but for requirement and the negligence requirement were satisfied.
Despite which the court ruled against the plaintiff.
The legal definition quoted in the paper is:
“The central question in any employment-discrimination case is whether the employer would have taken the same action had the employee been of a different race (age, sex, religion, national origin etc.) and everything else had been the same.”
I find it clear and correct. My (naive?) take would be that an algorithm that does not have the protected characteristic (age, sex, etc) as input cannot discriminate based on this definition – because the decision is based only on ‘everything else’.
I suspect that the paper says something different, but following the math would take a while… an explanation or example (if possible) would be appreciated.
“My (naive?) take would be that an algorithm that does not have the protected characteristic (age, sex, etc) as input cannot discriminate”
This is the natural first reading, and (except in very special cases) it doesn’t work.
An informal explanation is proxies for race — even if you don’t include race in your model, you might include a causal consequent of race that essentially determines race (e.g. zipcode). So you have to think about _paths_ from race to your decision. For example the path race -> zipcode -> decision based on zipcode. Formalizing how to think about paths is what the paper is about.
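A minimal simulation sketch of the zipcode point (toy data, made-up numbers, plain numpy + scikit-learn; not anyone’s real model): even with race dropped from the inputs, the fitted model’s scores differ by race because zipcode carries the same information.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
race = rng.integers(0, 2, n)          # toy binary "race"
zipcode = race                        # extreme proxy: zipcode fully determines race here
# In this toy world the outcome differs by race-linked circumstance, not race per se
y = rng.random(n) < np.where(race == 1, 0.4, 0.2)

# The model never sees race, only the proxy
model = LogisticRegression().fit(zipcode.reshape(-1, 1), y)
scores = model.predict_proba(zipcode.reshape(-1, 1))[:, 1]
print("mean score, race 0:", scores[race == 0].mean())   # ~0.2
print("mean score, race 1:", scores[race == 1].mean())   # ~0.4

(The proxy is made perfect so the effect is stark; with a noisy proxy you get the same thing in attenuated form, which is why thinking in terms of paths rather than columns matters.)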
@Ilya Shpitser
It appears that your paper’s analysis of the COMPAS algorithm treats all effects of race which are not mediated by prior convictions as bias. Is this so?
The Nybbler: I wouldn’t take the preliminary COMPAS analysis we did too seriously. We only had access to a limited subset of their data, and had to approximate their predictions by learning (Northpointe does not release theirs, proprietary info).
But I think criminal record and similar types of things were the only mediators that seemed to us to be ok* to use.
(*) In reality, of course, even criminal record isn’t perfectly ok, for reasons described elsewhere here — cops hassle differentially. We had some discussion of this very point in peer review.
—
As I recall, though, we made some modeling assumptions (we weren’t looking for a scary result, just wanted to get the procedure to finish in a reasonable time), and got a fairly scary number. But again, the purpose of the paper wasn’t to give a definitive analysis, but to demonstrate the operation of the method.
Actual convincing analysis is (a lot of) future work.
I remember some years ago I read a paper on criminology when I was in a forensics chem class. Basically, what it said is that something like 90% of violent premeditated crimes and 95% of repeated violent premeditated crimes were committed by people who were easily recognised as criminals by average test subjects picked off the street. If so, it shouldn’t be that hard to have an AI that recognizes such signs.
Not necessarily, I think. Judging the intentions of other humans is a difficult task that I’d expect humans to actually be good at (as opposed to simple things that we’re crappy at, like arithmetic; a 1980s Texas Instruments calculator can knock the pants off anyone), both because of very strong evolutionary selection for being able to do so, in cooperative and competitive senses alike, and because we have the architecture to emulate other minds.
After seeing the article, I was about to jump in and post something similar, citing your advisor. So glad to see you beat me to it.
Journalists are absolutely misleading people when they talk about bias, infer it is intentionally immoral, and never print the underlying legitimate data that is the source of that bias. It is very rare to read a story about sentencing bias that even mentions racial disparities in violent crime (as the above comment does not). AI algorithms are there to remove disparities such as death sentences, not to insert them.
There is a huge groupthink of “racial disparities exist, therefore immoral racism.” The burden of proof to establish immoral intent left the building a long time ago.
I have never once read an analysis of a culture war issue from the usual journalistic suspects that stated the biases are factual and merit-based but that the real problem is they “encode awful biases in our society”. I think the issues would gather more traction if they were discussed like this, rather than automatically labelling those who disagree as x, y, z.
Anyone can compete in the marketplace with “unbiased” algorithms against competitors with “awful biased” algorithms. Citizens should have a say in any regulations that distort this process. Perhaps people repaying loans is actually causally correlated with things other than skin color, such as innumerable cultural factors that just happen to correlate with skin color. People mostly earn their bad credit ratings.
To the extent that these problems are chicken-and-egg issues, they should be addressed – but not necessarily by the heavy hand of government putting its thumb on the scale, and definitely not by the fourth estate hiding the root causes of an issue and telling only one side of the story.
Just dropping in here to say that this discussion is above your pay grade, based on what you said here.
I have COMPAS data, it’s freely available (well, some of it). You can get it too. I bet you wouldn’t be able to make heads or tails of it, or whether there is discriminatory bias in it. Or what discriminatory bias even is.
“Journalists aren’t misleading people, they are correctly pointing out regression models can encode awful biases in our society.”
I suppose journalists are on your illustrious “pay grade”, are experts on the academic definition of discriminatory bias, and fully articulate those differences to their readers. My reading of how they handle the subject of discrimination and bias suggests they do not.
I bow to your eminently high opinion of yourself.
Thanks for the ask_the_donald level hot take.
Author here. ProPublica did not show that the COMPAS algorithm exhibited discriminatory bias – in fact, they show it doesn’t. The COMPAS algorithm does not directly discriminate. It does not exhibit statistical bias. And it does exhibit calibration – a 30% risk score for a black person means the same thing as a 30% risk score for a white person.
What ProPublica showed is that more blacks than whites have risk scores in the marginal range, e.g. 30-50%, and remain in jail as a result. People in this regime have a higher false positive rate – 50-70% of them would not have committed a new crime. As a result, blacks overall have a higher false positive rate.
If we fixed this problem, then 9% more people would be murdered/robbed/raped by the black criminals we released. Or alternatively, a lot of innocent white people would remain in jail. https://arxiv.org/pdf/1701.08230.pdf
If this were true, then it would show up as statistical bias. Criminal records would stop being predictive of committing new crimes.
However, blacks are not wildly disproportionately hassled in proportion to their involvement in crime.
Or similarly, see this article by Philippe Lemoine, which covers similar ground. In fact, blacks have contact with the police at roughly the same rate as whites:
http://www.nationalreview.com/article/451466/police-violence-against-black-men-rare-heres-what-data-actually-say
I fully believe that Julia Angwin knows what she’s talking about. In my view she’s being deliberately misleading because it generates clicks.
Thanks for stopping by!
“The COMPAS algorithm does not directly discriminate.”
And how do you define this?
—
“If this were true, then it will show up as statistical bias.”
Statistical bias is expected error. This issue is NOT NOT NOT about the expected error of an estimator. It is about the data set itself being generated in a biased way. Us causal inference types think about this problem a lot (in another context than discrimination, usually).
Now that you said this, I am extremely worried.
There are many issues. This is an issue to worry about, but it is not the issue considered by the ProPublica analysis.
By “directly discriminate” I mean “make use of the `race` flag in the data”. I understand that the literature has all sorts of fancy definitions of algorithmic fairness. That’s not my critique.
My critique is that a casual reader of these articles will come away thinking that algorithms are making incorrect decisions, and that the solution is to somehow make more correct decisions and fewer incorrect ones.
What do you think a casual reader will think?
> It is about the data set itself being generated in a biased way.
No, the ProPublica article is not about this at all. None of the articles I mentioned are. If so, they would have needed an entirely different methodology to demonstrate this.
For example, one might do what Lemoine or SA did and compare arrest data to NCVS data, or look at other sources of data which remove certain sources of bias.
Is this your definition? Why should I care about it as capturing anything reasonable? I know Northpointe excluded race from their model. Of course, this does not remove discrimination-we-care-about for what I hope are obvious reasons.
The Northpointe regression also probably has expected error of 0. So what?
—
The casual reader will come away thinking the algorithm is making incorrect decisions — and they would be correct. They are incorrect, in a relevant sense we care about.
—
I think where we differ is I did not expect ProPublica to solve the problem for us, but to point out the problem exists. They did precisely that.
Academics, in part because of the ProPublica piece, are now actively working on the issue. The fact that someone off the street might misunderstand the point is not really an interesting criticism, as it can be leveled against essentially any public-facing piece of text, on any topic.
Your article, for example.
Ilya, You’ve completely dodged the main point of both the article and the comment you just replied to. Instead, you merely stated a non-sequitur (that you don’t care about my definitions).
I’ll repeat the core point again:
My critique is that a casual reader of these articles will come away thinking that algorithms are making incorrect decisions, and that the solution is to somehow make more correct decisions and fewer incorrect ones.
What do you think a casual reader will think?
Elsewhere I asked if you’d be willing to gamble your beliefs on this. (See https://slatestarcodex.com/2016/12/31/addendum-to-economists-on-education/ for discussions of how a similar bet was set up.)
Ilya –
Please quit this low-quality nonsense. Being rude without contributing any arguments, as you have now done multiple times in this comment thread, doesn’t convince anybody of anything except your rudeness.
Sorry, I edited the post. And I agree with you on what the casual reader will think, based on the ProPublica piece. (I think I disagree that they are being misled, though).
Ilya, I’ll reply to your edit.
By “incorrect decision”, I mean incorrect in the sense of predicting a black person is likely to recidivate when he isn’t, or a white person is unlikely to recidivate when he is.
With this definition, rather than the one you made up, do you think the casual reader will correctly interpret the article?
Are you willing to gamble on this prediction?
If you think the typical reader of my article will misunderstand something, can you state clearly what it is and why you think this?
I am not sure what a person off the street might say. They may or may not conclude this?
A person off the street is very confused about very basic probability things.
I definitely agree people off the street could be misled by the propublica article (or your article, or my paper, or…)
—
Why do you call the definition you mention “incorrect” — adopting ProPublica’s stance, or is this your own? “Incorrect” is a loaded word.
Using the term “correct” to describe a prediction as matching reality is not “loaded”. It’s merely the dictionary definition of the adjective “correct”.
If you feel the need to get into some postmodern debate about the nature of reality, I’ll bow out here.
I don’t know what you mean by “matching reality.”
Do you mean like, we learn the true E[Y | X] response curve? Sometimes you want this, sometimes you don’t want this.
—
I have an algorithm that maps data to an answer. Another person has another algorithm. They both use data (e.g. reality). They have different properties. You chose some property based on probabilities you like, and called an algorithm with this property “correct.” Presumably you would say my algorithm, which lacks this property is “incorrect.” Presumably a third algorithm by some person who wrote a fairness paper at NIPS this year also lacks your property, so it will also be “incorrect.”
—
What is “correctness” to you? Might be useful to taboo “correctness” and talk about real properties of estimators, e.g. unbiasedness, consistency, robustness, absence of some effect (what my paper is about), etc.
You might say “when it comes to predicting recidivism, we want to learn the true response curve based on the data.” (e.g. have a consistent estimator for true regression parameters). I would say doing this would “match reality” (since we learned the true curve, after all), but I would disagree that this is what we want here.
Because the data is not generated in a fair way, essentially.
—
To give you a simple example that comes up in my neck of the woods, I want to learn if asbestos is bad for you, so I learn the true response curve of asbestos on health in folks who work. The response curve would then show that asbestos exposure is positively correlated with health, which is weird. (This is sometimes called the “healthy worker survivor effect.”)
Here the regression model is “correct,” but answering the wrong question. We want the causal effect of asbestos on health, and the regression model, while correct, isn’t giving us the answer we want. Because the data was not generated in a fair way — in particular folks who were sickened by asbestos dropped out of the work force.
Another method is needed (there are a few, actually) that would presumably be “incorrect” by your lights, since it isn’t mimicking the true response surface of asbestos/health in the data you see.
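For anyone who wants to see that concretely, here is a crude toy simulation of the selection story (invented numbers; real analyses are more careful, and with stronger selection the estimated sign flips outright):

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
exposed = rng.random(n) < 0.5
frailty = rng.normal(size=n)                  # baseline health
illness = exposed & (rng.random(n) < 0.4)     # asbestos-related illness
health = frailty - 2.0 * illness              # exposure genuinely harms health

# Selection: sickened workers mostly leave the workforce, others mostly stay,
# so the employed-only dataset is missing exactly the people asbestos harmed.
stays = np.where(illness, rng.random(n) < 0.1, rng.random(n) < 0.9)

true_effect = health[exposed].mean() - health[~exposed].mean()
naive_effect = health[stays & exposed].mean() - health[stays & ~exposed].mean()
print("effect in the full population: ", round(true_effect, 2))   # about -0.8
print("effect among still-employed:   ", round(naive_effect, 2))  # much closer to 0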
Discrimination is a similar problem, we need to define what we want, first. What we want is not always the true regression function.
—
A lot of discussions/papers on fairness have this missing piece of not doing the required analytic philosophy first. They just pick some property they like, and just automatically assume it’s what we want out of a fair algorithm. A lot of these properties seem self-evident to the people who propose them, but interestingly, many disagree with each other. They can’t all be right.
I do think a lot of the philosophical legwork was already done in the legal literature, it’s just a matter of extracting it out of their language. I have a lot of respect for the philosophical chops of legal scholars.
Ilya, that’s fine. My criticism is that a lot of people read these articles and actually believe the algorithm is incorrectly computing E[Y | X].
The main point my article makes is “no, they are correctly computing E[Y | X], don’t let the word ‘bias’ mislead you”. You don’t seem to disagree with this point, correct?
Is your sole objection to my article my choice of topic?
Perhaps it might be easier if I told you what I thought of the propublica article, and you told me what you think is off:
(a) I think the article pointed to a real issue,
(b) I think the article formalized and addressed it poorly,
(c) I think laypeople reading it might come away with all sorts of wrong impressions about what’s going on.
I think these are probably uncontroversial points. The interpretation is, I suppose, where we disagree.
—
I think they did a lot of good bringing attention to a possibly real problem.
I don’t think they had a malicious intention, nor do I think they are stupid. They didn’t solve the problem properly but solving it properly is very hard.
I don’t think it was their intention to prey on laypeople’s misunderstandings to generate clicks/outrage/$$$/who knows.
I think you disagree on some of these.
—
I think if some people read the article and conclude that Northpointe incorrectly computed E[Y | X] and that this is a problem, then they correctly concluded that there is a problem, but are incorrect on what the problem is, and for an incorrect reason.
I also think that if some people read the article and conclude there might be a serious problem with how regression models are used for parole and sentencing, _without understanding the details_, I think that would be a good conclusion, and arguably the entire point of the article. You can’t expect a higher bar than this from a layperson.
—
I also think a layperson is much more likely to think of “bias” in the way we informally mean when we talk about discrimination than in the “expected error” sense. So I wouldn’t expect a layperson to be misled by propublica asserting things about “expected error” bias, as they don’t know what that is. They likely don’t even know what expected value is.
“Sometimes you want this, sometimes you don’t want this.”
Am I understanding correctly that you are arguing there are cases where we want to sacrifice calibration/precision of an algorithm like this in favor of other values? If I’m misunderstanding, can you clarify, and if I’m not, can you please explain?
Because I can’t think of any circumstances in which I set the goal “Tell me how likely outcome X is for individual/group Y” in which my stated goal is better served by being LESS accurate…
@Trofim_Lysenko
The goal “predict measurements of a variable for a certain population” is tautologically best satisfied by making your model as accurate as possible if “accurate” means “good at predicting measurements of a variable for a certain population”. But that goal is usually an approximation to/instrumental step towards another: see Ilya’s asbestos example.
@rlms
The asbestos example shows that in some cases a regression analysis cannot answer a question like “how dangerous is asbestos for me?”. The equivalent question here is “How likely is person X to reoffend?”. I have to say that so far, stucchio’s point seems to be the most relevant.
We know the regression analysis can’t answer the question about asbestos: it says that asbestos exposure is correlated with good health, but if I install asbestos in all my buildings, I will observe that the people living there later have an elevated incidence of mesothelioma relative to the population living in the non-asbestos buildings.
Likewise, if a predictive algorithm fails to correctly predict recidivism, we would expect the rates of false positives and false negatives to be worse than or the same as chance.
The goal here is “correctly identify high risk-of-recidivism individuals”, with the larger goal of “minimize repeat offenses”, which leads to “minimize crime”. So far, I have not seen a response to Stucchio’s claim that in order to tweak it to get the desired results (equalized false pos/false neg rates across races) you’ll have to accept either 9% more crime from reoffenders, or higher rates of low-risk-of-reoffense whites getting harsher sentences.
To my mind, the obvious thing to do then is either to refute Stucchio’s assertion and show that we can equalize those rates without the effects he claims, or to bite the bullet and argue explicitly that, for example, the 9% crime increase is worth it for the sake of the principle of justice and not being overly punitive to those who don’t deserve it due to structural oppression or what have you.
“Am I understanding correctly that you are arguing there are cases where we want to sacrifice calibration/precision of an algorithm like this in favor of other values? If I’m misunderstanding, can you clarify, and if I’m not, can you please explain?”
The way to understand our proposal is via these name-swapping experiments people run. The idea is: you send evaluators resumes that are identical except for the name, which is white-sounding in one version and black-sounding in the other. If you see a difference in how the resumes are evaluated, that means something bad is happening — evaluators are using race itself, which seems like it shouldn’t be relevant, controlling for qualifications.
(This is an oversimplification, because the way I said this makes it seem like you should drop race as a feature in your regression. But this doesn’t work, and explaining why will quickly get us into the weeds of the proposal. Basically it’s proxies for race, or “causal pathways” as we say, that matter).
It is quite simple to construct cases where the optimal regression does pay attention to the name. But we don’t want it to do that. We want it to do as well as possible, as long as it doesn’t do that.
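A minimal sketch of that construction in Python (all numbers invented), showing the unconstrained fit doing better on the observed data precisely because it uses the race column:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 50_000
race = rng.integers(0, 2, n)
quals = rng.normal(size=n)
# In the observed data the outcome depends on race even after controlling for
# qualifications (e.g. because the recorded outcome is itself biased).
y = quals + 0.5 * race + rng.normal(scale=0.5, size=n)

full = LinearRegression().fit(np.column_stack([quals, race]), y)
blind = LinearRegression().fit(quals.reshape(-1, 1), y)
print("MSE using race:   ", mean_squared_error(y, full.predict(np.column_stack([quals, race]))))
print("MSE ignoring race:", mean_squared_error(y, blind.predict(quals.reshape(-1, 1))))
# The "better" model is better only by the standards of the observed data; and
# if a strong proxy for race were among the inputs, the race-blind fit would
# quietly recover the same behaviour through it.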
—
Re: “and show that we can equalize those rates without the effects he claims, or bite the bullet and argue explicitly that, for example, the 9% crime increase is worth it for the sake of the principle of justice”
I think you are still trying to judge a non-regression problem by regression standards. The point of the asbestos example (a real example in epidemiology, by the way) is that it’s easy to construct cases where regressions are entirely irrelevant. As in, we shouldn’t use any part of them as a yardstick for success. Why should we? They are giving the wrong answer.
—
Equalizing conditional odds and so on are all regression concepts.
—
Your intuition of “well, if we want to check if asbestos is bad, run a trial” is a great one! A good related question to get at discrimination is “what trial might we run to check for it”. This is how we started too.
Two points:
1. Are you saying that you want to rule out rational discrimination? Imagine the case where the only information you have about someone is his race, and where race correlates with unobservable characteristics that are relevant to what you are doing. Suppose, for instance, that on average medicine A works better for blacks and medicine B for whites, but for some whites and some blacks it’s the other way around. You haven’t yet figured out what the underlying characteristic is that determines which medicine works better.
Am I correct in interpreting what you wrote about “we don’t want” as saying that, in that context, you should ignore race in deciding who to give which medicine to? I assume that wouldn’t be your actual view in that situation. If so, what defines the subset of decisions for which what you wrote applies?
2. The name switch example you offer has a problem–names correlate with things other than race. Suppose you picked a white name that is common for the children of college educated whites, a black name common among high school dropout blacks. If switching names affects the results, that might mean discrimination by race but it also might mean discrimination by parental education level.
EDIT: I’m going through the linked slides that were posted while I was at work. It looks like they may address part of my point about loss of meaningful information when you bar “proxies”.
@Ilya Shpitser
First, thanks for replying to me. I know you’re dealing with a ton of simultaneous critiques/questions and I appreciate that you’re trying to clarify your position here.
“The way to understand our proposal is via these name-swapping experiments people run.”
Right, I read through your paper. The problem I see with that analogy is that the test you chose in the paper very clearly specifies that ONLY race can be changed, and in the counterfactual everything else must be the same. That means by definition that any and all PROXIES for race must be held constant. If you throw out proxies for race as well as race itself you are no longer abiding by the terms of the test. This is even more important if “proxies for race” include variables that actually provide us with meaningful information. If I’m devising a test for prospective truck-unloaders at a warehouse (standing in a truck and manually hauling crates out until they can be placed on a pallet or conveyer belt and automated assistance can take over), it will probably involve ensuring they can lift and maneuver a certain weight of boxes unassisted for an extended period of time without slowing down. It turns out that “good upper body strength and endurance” is a pretty decent proxy for sex. If I throw that data out as invalid, I’m also discarding a fairly important piece of data if what I care about is “Pick people who will be able to shift heavy boxes around fast”.
“As in, we shouldn’t use any part of them as a yardstick for success. Why should we? They are giving the wrong answer.”
I’ll be honest, I do not understand your claim here at all. The yardstick for success in the asbestos example is “Am I reducing the number of people getting sick?”. That’s why it’s so easy to demonstrate that the regression is failing to give you the correct result. It fails the yardstick, because when we follow the prescriptions of the regression, more people get sick, not less. The yardstick here is “Are we reducing the number of recidivist offenses over using a different algorithm or not using the algorithm at all?”. So if it fails the yardstick, we’d expect either no difference in recidivist offenses or an increase. I fail to see how that can possibly be “the wrong question” in this context. Concerns like “people can have long criminal records despite being innocent because of police bias” (assuming for the sake of argument that this is a factual claim) are properly addressed by addressing problems with policing and arrests, not with design of algorithms used at sentencing.
I’m sure you understand that just because the same criticism can be levelled against other texts, it doesn’t follow that this specific text should not be criticized at all.
Right?
I get that you think the propublica piece did more good than harm, despite its flaws; I get it, and it’s fine. Some people still thought these flaws were worth exposing for the greater public. That’s fine too. Why the flamewar?
By the way, this a very confusing thread. Ilya seems to be arguing that nobody understands what bias, or incorrect, or reality, or anything means (except himself, of course)…
…while at the same time arguing that nobody is being misled by what journalists are writing. If this discussion is so complicated, what are the odds that the journalists got it right in the first place, so as not to mislead their readers?
I am not arguing that nobody understands bias. Lots of people understand bias. Understanding bias is one wikipedia click away. This is not esoteric knowledge by any means.
I am arguing a specific person doesn’t understand the difference between two types of biases, based on very specific things that specific person said, which were wrong.
—
I am also not arguing that people aren’t being misled by reading stuff online. I am saying that assigning fault for them being misled to journalists (because journalists are being deliberately misleading for various ends) is probably something I will generally disagree with. Serious journalists rarely do this.
—
I really don’t think you understand my position, but maybe that’s fine.
I think I am getting a sufficient number of meaningful and interesting questions that I may soon declare “thread bankruptcy” and ask if people might be willing to take it to email, with a longer turnaround time.
But very quickly:
(a) The aim of the proposal is not normative but operational. Which is why I went with the formula of “sometimes we want something and sometimes we don’t.” In cases where heart medication gives side effects to [ethnic group] we definitely want to take ethnicity heavily into account.
It is not my place to tell you which paths are fair (perhaps all are fair for heart medication). My place is to tell you what to do if you decided the direct path from race to hiring is bad, and you want to run a regression to make hiring decisions for you.
—
(b) We do not use the precise legal definition in our procedure, because our procedure aims to avoid discrimination on average. There is a unit-specific version of this, to be addressed later. This is why we don’t fix things to constants, but leave post-race characteristics at the distributions they would have had, counterfactually, had the race stayed the same, but changed for the purposes of the direct path.
—
(c) “are properly addressed by addressing problems with policing and arrests, not with design of algorithms used at sentencing.”
It’s not that I disagree, but this is hopelessly unrealistic. Compare also to: “problems with sampling bias in surveys that determine people’s opinion of who they will vote for are properly addressed by asking every US citizen their opinion” or “problems with learning whether smoking is bad for you are properly addressed by running a randomized controlled trial on smoking.”
It is unrealistic to survey everyone, or do RCTs on probably harmful exposures, or get the police not to be awful in some locales. However, by being smart with data analysis algorithms, we can compensate for these problems, anyways.
Perhaps, but I don’t yet understand how.
Take a simple case which I think, although I might be mistaken, is the sort of thing you are thinking of. The legal system in a jurisdiction is more willing to arrest and convict a black than a white on the same probability of guilt. Probability of reoffending is estimated using an algorithm whose input is number of past convictions. The result is that a black and a white who have both committed ten burglaries and are both equally likely to commit another get recorded as having different past convictions, the white having been convicted for five of his, the black for six. So the algorithm predicts the black is more likely to offend. And it appears to be correct, if we measure offenses by convictions, because when both of them reoffend the black is more likely to be arrested and convicted. Is this the sort of thing you are thinking of?
If so, the problem is how to avoid it. The direct way would be to find some measure of legal system bias, but none occurs to me. A tempting solution is simply to look at what percentage of blacks and what percentage of white get arrested and convicted and attribute any difference to legal system bias. But that’s wrong because it depends on assuming that blacks and whites actually offend at the same rate, and there is no reason to assume that.
Part of your solution seems to be to forbid the algorithm from using race as an input, even though that might improve both its apparent accuracy and its actual accuracy. Beyond that you seem to want to forbid it from using proxies for race. But if blacks and whites actually offend at different rates, then number of offenses is itself a proxy for race.
I haven’t adequately followed all of this long discussion, but part of your point seems to be to demand a causal link for any variable used. But it’s hard to think of a proxy for which one couldn’t offer a possible causal link, and it’s hard to see how one could do better than that. If we had a fully adequate theory of causation, after all, we wouldn’t need to use statistics, we could just derive probability of reoffending from the accurate model.
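To make the burglary scenario two paragraphs up concrete, here is a toy simulation (all numbers invented):

import numpy as np

rng = np.random.default_rng(3)
n = 100_000
black = rng.integers(0, 2, n).astype(bool)
# Toy world: true offending is identical across groups...
true_offenses = rng.poisson(3.0, n)
# ...but each offense turns into a conviction with a different probability by group
convictions = rng.binomial(true_offenses, np.where(black, 0.6, 0.5))

print("mean true offenses, black:", true_offenses[black].mean())        # ~3.0
print("mean true offenses, white:", true_offenses[~black].mean())       # ~3.0
print("mean recorded convictions, black:", convictions[black].mean())   # ~1.8
print("mean recorded convictions, white:", convictions[~black].mean())  # ~1.5
# Any score built on conviction counts now separates the groups; and if
# "reoffending" is also measured via convictions, the score can even look
# well calibrated, so the bias is invisible from inside the recorded data.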
David, I think I am too tired for a full answer to your very good question, but a quick note: the fact that we don’t necessarily forbid the use of race as input is an essential feature of the proposal. We forbid the use of race via certain “bad” causal pathways only.
@DavidFriedman, if you want to detect biased measurements in one part of a process, the typical way is to look for other proxy measurements that must (by fundamental relationships) exclude the bias.
For example, if you think (as Ilya seems to) that police are biased in arrest rates against blacks, you should look for crime measurements that exclude the police. For example, the NCVS or crime *reports*. In fact, when you analyze the data this way, you discover that arrests and reports are pretty well correlated.
Here’s a blog post where this analysis is attempted, though the actual data strongly disagrees with Ilya: https://slatestarcodex.com/2014/11/25/race-and-justice-much-more-than-you-wanted-to-know/
Another thing to do is look for fundamental relationships that should be reflected in the data (absent bias), but are not. Here’s an example: most crime is intraracial. So now imagine blacks were more likely to be arrested than whites for similar crimes. That would suggest that crimes against whites (which are mostly also perpetrated by whites) are disproportionately not being resolved.
Here’s an example of this technique in action, albeit not in a criminology context: https://www.chrisstucchio.com/blog/2017/sequential_conversion_rates.html
The short answer is that there are a lot of statistical techniques that can be used to measure bias in measurements. Many of them have been studied, but very few of them explain crime disparities. I have no idea why Ilya keeps appealing to police hassling black men, since the data doesn’t support it.
http://www.nationalreview.com/article/451466/police-violence-against-black-men-rare-heres-what-data-actually-say
@Ilya
I’m perfectly fine taking this to e-mail if you’d prefer. My personal e-mail is a gmail one for brerwolf. I’m re-reading the paper you co-authored now specifically to try and better follow your claims about how to preserve useful data from proxy inputs (that is, inputs that are correlated with the sensitive characteristic but also strongly correlated with with the probability you’re wanting to determine), and will probably have follow-up questions based on that re-reading.
On the big picture question of system reform and fixing pieces other than sentencing decisions, I have to strongly disagree that I’m being unrealistic. I mean, are you basically of the opinion that the way the police and court system process blacks has not changed in the past 40-50 years? That there’s no real difference in bias or prejudice between Bull Connor’s Birmingham PD circa 1963 and the Baltimore PD circa 2018? I think that’s both unduly pessimistic and not supported by the data we have to hand.
EDIT: Oh, and for those interested in the specific COMPAS inputs, I believe I found the actual assessment form. obviously we don’t know how these are weighted or processed, but as far as raw categories of inputs this seems to cover them: https://www.documentcloud.org/documents/2702103-Sample-Risk-Assessment-COMPAS-CORE.html
“Is this the sort of thing you are thinking of?”
Generally, I am worried that if one is training regressions for predicting recidivism (or loan defaults, or…), the questions those regressions are answering might not be helpful for the kinds of decision-making we want to do, if the data fed into those regressions has various selection biases built in.
This is similar to how an asbestos/health regressions cannot guide decision-making of the “let’s avoid asbestos in buildings” type, if given biased data, where sick people are missing.
Your question is, what sorts of biased data might we have for recidivism. I don’t actually have a full answer — but there are people who work on this type of data who probably have a better idea. Some examples though:
Races could have vastly different rates of minor offenses (like traffic stops, “resisting arrest,” disrespecting cops in various ways). So a black person and a white person who are from similar backgrounds and have similar personalities would look very different in terms of their criminal record.
Drugs popular in African American communities carry much larger sentences, essentially for no good reason, compared to other types of drugs. So incarceration records might look different, for no good reason.
Recidivism is defined as a subsequent arrest, not a subsequent conviction. It is easy to imagine that some races would be arrested more (cops love to arrest folks even if they know a conviction will not stick, as an intimidation tactic; there was a recent news story of someone getting arrested for criticizing an official getting a raise).
People who work on fairness disagree on what to do when faced with trying to do regressions from data with biases of the above types in it. One difficulty is defining what bad feature of the data represents “unfairness” and how to make this bad feature go away.
Our take is we should think about things people intuitively think shouldn’t happen — like directly using race in a decision (because what one’s race is, directly, is not relevant for recidivism prediction or loan decision). Then we say, ok if these things shouldn’t happen in data generated from a hypothetical “fair world” where above biases are not built in to data generation, let’s try to find a world close to ours where those things people intuitively think shouldn’t happen in fact do not happen.
I suppose, one way to think about the problem is this:
We want to find a way to get data from a “fair world” where we can just do regression on the data in peace, and use it to decide who to give loans to and who to let out of jail.
—
edit: An important point about selection bias — the type of selection bias you have in your data is not a testable claim. In other words, if I think people who get sick from asbestos are missing at random, this is an assumption, not a function of the observed data. I needed to point this out because one naive question one might have is whether we can, if given some data, run some sort of procedure on it to figure out what sorts of selection biases it has.
Information on biases in data generation is “side information” from domain experts. This is similar to how causal models are “side information” on the causal story behind the data.
I wanted to point this out, because if one were a hard-nosed Popperian empiricist, this entire business would look insane and unscientific. However, a lot of information that guides data analysis you want to do is not in the data itself, and cannot be checked with the data.
One pretty obvious problem with correcting for an assumed bias that can’t be tested for is that the people assuming the bias may be wrong, and in fact may just end up importing the researcher’s biases into his conclusions in a way that looks impressively mathy from the outside.
Absolutely. Which is why you use trusted domain experts for your side information.
—
But in general, statistical analysis is a lot more tentative than a layperson might think it is, for these reasons. Even ordinary kinds of analysis, not to do with discrimination.
“Which is why you use trusted domain experts for your side information.”
But the problem with that is that expert opinion is the second-lowest tier of evidence, above only layperson opinion, and it doesn’t have a great track record. For example: the fundie churches manage to line up hundreds of people with biology-related academic titles to back ID and creationism vs evolution.
Perhaps I missed it, but have you explained at some point in the thread what you mean by that? It isn’t clear to me.
Also, are you saying that that is what the ProPublica article claims or are you saying that their claim was false but useful because it got academics to ask the right questions?
I would say propublica didn’t address the problem properly. But this is a very high bar for a journalist who knows some data science to clear.
I am fairly certain the journalist in question self-described their goal as getting academics interested.
—
An example of data generated in a biased way is one I had above, where folks who were sickened by asbestos dropped out of the workforce, so you only see people in your data who were unusually healthy or resistant to asbestos. That is, the data generation process is suffering from confounding/selection bias. If you aren’t careful this will lead you to a silly conclusion about the relationship of asbestos exposure and health.
You might imagine recidivism prediction data suffers from similar issues due to all sorts of bad things happening in policing and recording and so on.
But then you might happen to check, and the causal issues may run in the opposite direction from what you expect, or there might be no effect at all. You’ve shown a lot of “in theory maybe there is a causal model that makes this issue different in a specific way”, but as far as I can tell you’ve provided little evidence for why we should expect the causal errors to run in one direction or the other.
Tell me if I fill in the details wrong . You would say there are obvious reasons why we might expect something like: black –> lives in poorer neighborhood –> gets more parking tickets because police go there more –> some civil or criminal penalty. And then this last observed bit will influence the score given to predict recidivism. I only skimmed your paper, but if I’m understanding correctly, an inference that includes only the last piece of data and does not correct for this pathway being discriminatory (in the intuitive sense) will likely be discriminatory in your model.
We tried our stuff on the part of the COMPAS data that is freely available, and the preliminary results aren’t encouraging.
—
I think you got the idea in our paper right. Basically we first sit down and agree on which paths are “bad.” Then we do regressions in a way that make “bad” paths go away. A common “bad” path is using race directly in the outcome, not mediated by other variables. But other bad paths are possible, we run through some examples.
@Ilya Shpitser
Sorry if I missed something relevant to this. But a very speculative question. Say rather than an acyclic directed causal graph, you have a causal graph where part of the phenomenon is time-varying, with two variables affecting each other’s changes over time. So the graph either has cycles, or you could treat each value of a variable at a given time as its own node, and you’d get sort of a long criss-crossing ladder subgraph. Is there any obvious way to extend the idea of bad paths to this case? Not one that necessarily works well, mind you. Just some way or ways.
Yes. Path-stuff extends to general cases that include time-varying confounders and so on. I have papers on this (either in annals or cognitive science, depending on what sort of style you like to read). If you are seriously interested, I can probably arrange to chat over skype also.
@Ilya Shpitser
Thank you for the offer. I don’t yet think I’d be spending your time well on skype. I may ask in the somewhat distant future if I find the time to read some of your papers more carefully. Maybe I’ll try to slip one of your papers in to the journal club in my lab. Your expertise is in a type of modeling I don’t work with, so it would take me some time and effort to ensure myself that I couldn’t answer any questions I had from using the tools. I usually find that’s better for my long term gains than asking someone even though asking can be faster.
For future reference:
https://arxiv.org/abs/1411.2127
https://arxiv.org/abs/1205.0241
The latter one has a little notation bug in an example (not important to central claim).
I didn’t agree with your definition of discrimination, but now that I write out the argument against it I think I’m more convinced it is right.
We’ve got an instrument that spits out a number which supposedly predicts whether or not you commit another crime. And if this number is 30%, 30% of black people and 30% of white people with that number both reoffend.
Your argument seems to me to be that to get this property the model had to add in a race based term, and that we should consider this discriminatory. And as it goes this seems intuitively convincing.
But now suppose we remove this term. And now those people getting 30% marks are actually less likely than 30% to reoffend; say 25% of them do. Don’t they have a colorable case that “this algorithm is mispredictive for my race, and the judges are confining me for longer as a result”? And if there are proxy terms tightly correlated with race that get added, now things get hairy.
I think that having the same predictivity for black and white defendants is a more defensible criterion. If 70% of those marked as 70% reoffend, whether white or black, and the judges sentence them accordingly, then if the black defendant was white and got the same number they would be treated the same.
“I think that having the same predictivity for black and white defendants is a more defensible criterion.”
What we do in practice is partly a political problem, and that’s a separate can of worms.
However, one relevant slogan speaking to your proposal is “equal pay for equal work.” We don’t say “equal pay for both genders,” right?
—
I think my take is, we should imagine a hypothetical world where everyone is race-blind in the right way. And we should look at the closest world to the actual world we observe with this property (race-blindness-in-the-right way). Probably we will predict less well in various ways if we do this, than had we used the observed data as well as possible (e.g. in the words of the article author “the regression isn’t correct.”)
But I think we want race-blindness-in-the-right-way.
By the way, it’s not true that this has to do with removing regression terms — it’s a bit more complicated.
Here are some slides someone at Berkeley apparently did based on our draft (I didn’t actually realize this until fairly recently):
https://www.stat.berkeley.edu/~nhejazi/present/2017_fairml_shpitser_withnotes.pdf
@Ilya Shpitser
The problem is that there are people who say the former and believe that the latter statement follows from this (and also people who do not believe that, of course).
So the end result is that you have people using a very agreeable slogan, while they actually mean something more specific that far, far fewer people agree with.
This is commonly called a motte-and-bailey here…
Having the same goal doesn’t mean that people can agree on the path to that goal.
There are people who want to reduce all bias as much as possible & there are those who believe that we (temporarily or permanently) need to introduce bias to counter other bias and/or preserve existing bias that works in the ‘right’ direction.
Many of the disagreements among people are not about the goal they claim to have*, but the path to it.
* Although I believe that quite a few people don’t actually have the agreeable goals they claim to have, but far more selfish ones that they legitimize by combining altruistic goals with cherry picking of facts and other fallacies. So they reason themselves into believing that achieving the selfish goals will also achieve the altruistic goals.
The bigger picture issue is algorithmic fairness. As you are clearly aware, there are multiple possible definitions of “fairness” including both the ones you focus on (statistical bias, calibration) and what ProPublica observes (difference in FP/FN rates).
There is an interesting and subtle discussion to be had here, for example in the paper you cite and elsewhere in the academic literature, about what is the right definition and the resulting tradeoff. Your article does mention these more interesting issues, but does not seem to explore this in depth and ultimately takes the primacy of your particular choice of objective function for granted (e.g., use of terminology such as “correctness” vs. “wishful thinking”).
This choice is your personal politics/philosophy, reasonable people (e.g., the ProPublica authors) may differ and offer other objectives.
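For reference, the tradeoff at issue can be written down exactly; this is the standard relation from the fairness literature (e.g. Chouldechova 2017), not anything specific to COMPAS:

$\mathrm{FPR} = \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \mathrm{TPR}$

where p is a group’s base rate of reoffending, PPV is the fraction of high-risk labels that are correct, and TPR is the fraction of reoffenders labelled high risk. If the score is calibrated (equal PPV) and catches reoffenders at the same rate (equal TPR) in both groups, then groups with different base rates p must end up with different false positive rates.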
It’s not about politics/philosophy so much as it is about topic. The topic of my article is journalists like ProPublica misrepresenting an academic topic.
I did not advocate for any choice of objective function. I advocated against calling predictions inaccurate when they are merely inputs to a decision process one considers unfair. One should call predictions inaccurate if they fail to match reality.
Note that if ProPublica wrote an article “A widely used algorithm accurately predicts blacks are a lot more likely to commit crimes if let out on parole, but we think you should let them out of jail anyway because it’s unfair”, I would not call them misleading.
But that’s not the article they wrote.
Also, if you’d like to read my take on algorithmic fairness issues, rather than on the media misleading people, I’ll be discussing fairness (in the Indian context) at 50p in Bangalore next month.
“I did not advocate for any choice of objective function.”
Implicitly, that’s exactly what your article does. You cannot separate the outcomes of something like COMPAS from the objective function you feed it.
For example, you say:
“ProPublica labelled the algorithm as biased based primarily on the fact that it (correctly) labelled blacks as more likely than whites to re-offend (without using race as part of the predictor), and that blacks and whites have different false positive rates. This is actually just a necessary mathematical consequence of having an unbiased algorithm — no decision process, whether implemented by an AI or a sufficiently diverse group of humans, could possibly avoid this tradeoff.”
This is a result of the objective function they chose and could be corrected by choosing a different one. COMPAS decided to minimize the error in estimated recidivism probability full stop. They could have instead chosen to minimize this with a penalty applied for differentials in FP rates among protected classes.
It’s also worth noting that you conflate statistical bias (in the second sentence) with the colloquial meaning of bias (in the first sentence, in reference to ProPublica) in this paragraph. You clearly know the difference (you do a great job of explaining it in the article), so it’s odd that you choose to ignore it here.
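For concreteness, a minimal sketch of what such a penalized objective could look like (illustrative only; this is not how COMPAS or any real system is trained, and real implementations would use a smooth surrogate for the FP-rate term):

import numpy as np

def penalized_loss(p_hat, y, group, lam=1.0, threshold=0.5):
    # Log loss plus a penalty on the gap in false positive rates between two groups.
    # p_hat: predicted probabilities, y: 0/1 outcomes, group: 0/1 group labels.
    eps = 1e-9
    log_loss = -np.mean(y * np.log(p_hat + eps) + (1 - y) * np.log(1 - p_hat + eps))
    flagged = p_hat >= threshold
    fpr = [flagged[(group == g) & (y == 0)].mean() for g in (0, 1)]
    return log_loss + lam * abs(fpr[0] - fpr[1])

Raising lam trades predictive accuracy for a smaller FP-rate gap, which is exactly the tradeoff being argued about in this thread.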
Yes, they could have put their thumb on the scale. But then they’d be using race as an input to their algorithm, which would subject them to direct discrimination claims.
>They could have instead chosen to minimize this with a penalty applied for differentials in FP rates among protected classes.
COMPAS does not have a utility function. COMPAS is computing P(will recidivate|data), which is basically just a posterior estimate. (Not exactly, mainly because for some reason they didn’t do isotonic regression to it.)
In theory a decision process using COMPAS could look like:
decision = argmax_{send_to_jail} U(P(recidivate), isBlack, send_to_jail)
for some U(…) that varies based on isBlack. But in reality, that decision process actually happens in the mind of a judge, not in COMPAS.
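Rendered as toy code (the utility function and its costs are made up purely for illustration; nothing here comes from COMPAS itself):

def U(p_recidivate, is_black, send_to_jail):
    # Toy utility: jailing has a fixed cost, releasing has an expected cost from reoffense.
    # A race-dependent decision process would let these costs vary with is_black; this toy one does not.
    cost_of_jailing, cost_of_reoffense = 1.0, 3.0
    return -cost_of_jailing if send_to_jail else -cost_of_reoffense * p_recidivate

def decision(p_recidivate, is_black):
    # argmax over send_to_jail in {False, True}
    return max([False, True], key=lambda jail: U(p_recidivate, is_black, jail))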
Are you advocating that COMPAS should lie to the judge in order to manipulate this process to your favored outcome? Should other parts of the criminal justice system, e.g. crime labs, also deliver misleading information to judges in the interest of fairness?
You use a lot of loaded words. Is not reporting the true correlational asbestos/health link in a public health study “lying to the public about the health effects of asbestos”?
The predictions fail to match reality, because the data fed into the algorithm is a biased estimator of the generating function.
The algorithm accurately predicts whether blacks are more likely to be convicted of a crime while on parole, NOT if they are more likely to commit a crime.
Juribe, why do you believe that one is not a good proxy for the other? Do you have any evidence for this?
As cited upthread, there is quite a bit of evidence suggesting that arrest and conviction data track offense data reasonably well. The latter can be measured via crime *reports* (e.g. the NCVS) and somewhat indirectly via data on victim demographics (most crime is committed by someone similar to you), in order to get an estimate of how biased arrest data might be.
The general result I’ve seen is “it’s not very biased”. Do you have evidence to the contrary?
What if ProPublica wrote an article saying “a widely used algorithm is much better at finding reformed white convicts than reformed black convicts?” That strikes me as a pretty accurate summary of the problem.
I am torn between criterion (1) and criterion (2), but surely there is a fact pattern which would move your sympathies to (2). Suppose that we have 1,000,000 black prisoners, of whom 600,000 will reoffend, and 1,000,000 white prisoners, of whom 400,000 will reoffend. The algorithm returns:
High risk:
999,000 black prisoners, of whom 599,600 reoffend
1,000 white prisoners, of whom 600 reoffend
Low risk:
1,000 black prisoners, of whom 400 reoffend
999,000 white prisoners, of whom 399,600 reoffend
This meets criterion (1) almost perfectly. Is nothing wrong here?
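To make the tension explicit with these numbers (reading criterion (1) as equal reoffense rates within each risk bin, and criterion (2) as equal error rates across groups – my paraphrase of the criteria upthread):

# Counts from the example: (high-risk reoffenders, high-risk total, low-risk reoffenders, low-risk total)
groups = {'black': (599_600, 999_000, 400, 1_000),
          'white': (600, 1_000, 399_600, 999_000)}

for name, (hr_re, hr_n, lr_re, lr_n) in groups.items():
    rate_high = hr_re / hr_n                  # reoffense rate among "high risk"
    rate_low = lr_re / lr_n                   # reoffense rate among "low risk"
    false_positives = hr_n - hr_re            # labelled high risk but did not reoffend
    fpr = false_positives / ((hr_n - hr_re) + (lr_n - lr_re))
    print(name, round(rate_high, 3), round(rate_low, 3), round(fpr, 4))

# Both groups show ~60% reoffense among "high risk" and ~40% among "low risk" (criterion 1 holds),
# yet the false positive rate is close to 1 for black prisoners and close to 0 for white prisoners.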
@DavidSpeyer, that’s another fair way to describe it. I think we can all agree that making algorithms more predictive is a desirable goal.
Literally every single person involved in the process is incentivized to make roc_auc go up. If they can figure out how, Northpointe can go sell COMPAS 2.0 for more money. Communities can release more reformed criminals while keeping more dangerous ones in jail. The statistician building the algo can put “improved roc_auc from 0.65 to 0.85” on his resume and demand a big pay raise.
The only people who might oppose this are prison guard unions.
In the case of lending (which is what I work on), I can literally deploy an algo with higher roc_auc and my company makes more money.
The only problem is that doing this is hard. It’s a lot less “we’re such woke baes standing up to racism” and a lot more “I ran a Spark job that computes a KDE on geolocation data and discovered new crime hotspots”.
@stucchio We certainly want to minimize error, but I think we also want to minimize the ability for a person to recognize in advance that he is at elevated risk of a certain sort of error.
Would it not be rational for a white convict, hoping to go straight, to think “77% of white folks in my position got a low risk score. My odds are good, I better get my GED and keep out of trouble”? And for a black person in the same position to think “only 55% of black folks in my position got a low risk score. The algorithm can’t tell the difference between me and the gang bangers. I’m screwed from the start!”
I think this is worse than if 65% of both populations got a low risk score.
Numbers pulled from the first bullet point of ProPublica’s analysis. My understanding is that you object to ProPublica’s framing, but you aren’t saying their literal numbers are wrong.
@David Speyer
Wait, how do you think risk scores are being used? The article indicates they are primarily used at sentencing. Each convicted person will know the result of their sentencing which may or may not take into account the risk score. From their point of view, there should not be any uncertainty as to what group they are in. Your story makes no sense.
And if you’re saying they take into account what risk score they might get before they commit any crime, I find that highly implausible. Even if they did, knowing you’re more likely to be assigned to the high-risk group (which gets more severe punishments) would normally be assumed either to have no effect or to deter crime. It would be incredibly surprising for someone to become more likely to commit a crime because the punishment if caught has increased.
This behavior would be completely irrational.
The algorithm assigns each individual a risk score based on their behavior and history. An individual’s race is not used in the score at all. A black person who gets his GED, lines up a job for when he gets out of prison, and tells the psychologist evaluating him “no means no, rape is bad, never again”, will have the exact same risk score as a white person who does the same.
Blacks have a higher risk score on average because they behave differently.
I agree that their numbers are mostly correct (the p-values they report when looking for statistical bias are incorrect, however – they failed to correct for multiple comparisons). But it’s pretty weird how so many readers – yourself included – come away with totally factually incorrect beliefs.
To quote from the propublica article, the score is also based on questions such as “Was one of your parents ever sent to jail or prison?” “How many of your friends/acquaintances are taking drugs illegally?” and “How often did you get in fights while at school?”. All of these are things which the convict brings with them from his pre-crime life, and are different for defendants of different races. Unless you mean that “if the black convict is smart enough to lie and give the answers to these questions a white person would give”, he will not get the same score as a white person, no matter what he does in prison.
For the record, things which are clearly wrong in the ProPublica article (and which I have tried not to repeat):
* Saying that the algorithm predicted people would reoffend, rather than saying that they had a high risk. “High risk” meant something like 50-60%; it shouldn’t be viewed as a failure of the algorithm that they didn’t all reoffend.
* Not making very clear that simply adjusting scores by a racial factor would lead to more reoffending criminals being paroled — there is no reason to think that the black convicts who would benefit are the reformed ones who are being missed.
So there are no black people and white people who would already give the same answers to these questions?
Maybe the criteria themselves are dumb (especially as you can apparently just lie about them), but what’s the big difference between this and, say, crimes committed? According to you it’s that these criteria are pre-crime, but that’s not that big of a difference, is it?
@DavidSpeyer, parents going to jail might be problematic. Plenty of white people have parents who went to jail too, though. I have no strong opinion on it.
As for things like being friends with drug users and getting into fights, those are things an individual can control. So yes, COMPAS does incentivize people to avoid drug users and fights. Similarly criminal history – COMPAS gives you a higher risk score if you commit more violent crimes, and thereby incentivizes you to commit fewer crimes.
If blacks disproportionately choose to commit violent crime and be friends with habitual criminals, how is that the fault of COMPAS? And how is using these choices that any individual can make somehow unfair?
I don’t have a problem with the idea that we should consider different definitions of fairness when deciding whether or not to parole any given offender, or set parole policy in general.
I take a bit of issue with calling an algorithm unfair or biased when it was asked to calculate a recidivism rate and appears to do so accurately; it answered exactly the question it was asked.
Indeed, blaming an algorithm because you dislike how it is used is like blaming a shovel because someone got smacked on the head with one.
“Indeed, blaming an algorithm because you dislike how it is used is like blaming a shovel because someone got smacked on the head with one.”
But if the government decides that a particular shovel is the tool they’re using, and that shovel smacks a protected class in the head more than another shovel might, saying, “This shovel is good at digging holes” is a non-sequitur. The point isn’t that you shouldn’t use any shovels, just that you should think carefully about the shovel you choose.
Yes, I think Matthew stated the issue concisely.
There is more to life than unbiased regression models. Even for things less tricky than discriminatory bias problems.
It feels to me like there are two different things going on here:
a. There’s a set of statistical issues about making correct predictions that come up because of problems with the data, selection bias, or whatever. Specifically, our crime data (and recidivism data) comes from the criminal justice system, which may have all kinds of biases built in. Some may be explicit racial biases (the cops arrest the black suspect first), others may be other kinds of correlations (blacks tend to be poorer, poorer people can’t afford private attorneys or bail, bail and private attorneys make you less likely to end up convicted or pleading guilty, etc.).
These are pretty hard to untangle. Worse, there’s a huge amount of room for sophisticated statistical techniques that are easy to get wrong, especially when the researcher or organization using them really prefers some answers to others.
b. There’s a question of values or law or policy, having to do with what our goals are with a predictive algorithm. For example, would we prefer a predictive algorithm for making parole decisions that:
(i) Minimizes errors (according to the data we have, maybe with some attempted corrections for flaws in the data)?
(ii) Minimizes errors subject to constraints on what information may be taken into account? (Race, religion, juvenile history, history of psychiatric treatment, etc.)
(iii) Minimizes errors subject to some other constraints on the predictors’ recommendations (equal recommendations for matched black and white prisoners even when unequal recommendations would give a better prediction of whether they’d reoffend, longer recommended sentences for sex criminals to send the right message, etc.)
and so on.
I think it’s really important to treat these two issues separately. For the first issue, you’re dealing with a hard technical problem where neither the public nor judges can really do much more than try to follow along and understand what the issues are and what you can do to address them.
For the second issue, we’re talking about an actual public policy question involving a tradeoff. We need to make that explicit – there’s nothing about it that needs to exclude non-experts. We want to know whether we should prefer more accurate predictions, or ones that exclude some information, or ones that give some kind of equality of outcomes at the cost of worse predictions.
A major problem with these algorithms, as I understand it, is that the users (judges) are mostly not at all sophisticated about statistics or machine learning. So you’ve got this magic black box that gives predictions about how likely someone is to reoffend, and it’s like it came down the mountain with Moses or something.
What I think we need to do is both to try to do the technical work to make sure we’re as accurate as possible in our data and predictions, and also to surface the policy/legal tradeoffs to the judges and legislators and voters. I think there is a *huge* temptation to keep some of those tradeoffs hidden away, because they involve tradeoffs between important values and get lots of people mad, but if we don’t surface those tradeoffs to the policymakers and the public, we end up leaving it up to the technical experts to make policy, and there’s no reason to think they’ll do a good job of it.
I disagree. Shovels are, by definition, for digging holes, and you should optimize for that. Any other problems are, practically by definition, not problems with the choice of shovel but with how it is employed. To extend the metaphor, if I’ve got people getting whacked over the head by misemployed shovels, the answer is absolutely, categorically not “Buy nice soft foam rubber shovels. Sure, they’re less good at digging holes, but now people won’t get hurt!”.
A lot of the claims about how white male programmers are introducing bias into the algorithms only make sense if the people who make those claims believe that the shovels themselves are broken.
To me, there are two reasonable possibilities to intervene:
1. Improve the algorithms if they are broken/have a clear bias
2. Partially ignore/counter the algorithms, based on decent evidence of the bias you try to counter with the bias you introduce
Instead, a lot of people want to make the algorithms worse. I understand why, because it’s very hard to make an objective argument for (2), for the exact same reason why improving the algorithms is hard (plus other reasons*). By fudging secretly, you don’t actually have to defend what you are doing, so you can get away with adding whatever bias you prefer (and don’t have to prove that it does a good job at countering other biases).
However, is it good/fair/helpful to introduce secret biases that in themselves may be a huge source of unfairness?
* Some people want to intervene by canceling out the exact biases with counter-biases, while many others want to do it at the group level. The latter can create huge variance: some group members suffer from the negative bias yet do not profit from the positive bias, while others do not suffer from the negative bias but do profit from the positive bias. Individualists tend not to like it when suffering and privilege only even out at the group level while there is huge unfairness at the individual level.
Aapje:
I think “white male programmers are building their biases into the models” is an easy rhetorical point that doesn’t really hold together when examined. (Among other things, an awful lot of those “white male programmers” are East Asian or South Asian.) “ML algorithms trained on data with biases will make predictions that reflect those biases” is accurate, but it involves fairly subtle arguments about which biases exist (or whether they exist at all). And that’s separate from policy-level decisions about whether we should accept less accurate predictions to avoid basing important legal or commercial decisions directly or indirectly on race.
From the bullet points in the article (https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm)
“Black defendants were often predicted to be at a higher risk of recidivism than they actually were.” and “White defendants were often predicted to be less risky than they were.” This sounds like bias to me?
I’d just add that your proposed explanation (blacks being more often in the medium-risk category) sounds like a reasonable explanation for different false positive rates. It is also deeply suspicious that ProPublica makes these claims without plotting the data – with a plot of estimated risk vs. actual risk (similar to a calibration curve) for whites and blacks, it should be abundantly clear whether the algorithm is fair or not.
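For anyone who wants to make that plot, a minimal sketch, assuming a pandas dataframe df with one row per defendant and hypothetical column names score, reoffended, and race (adjust to whatever the released data actually calls them):

import matplotlib.pyplot as plt

# df is assumed to exist with columns: score (risk score bin), reoffended (0/1 outcome), race.
for race, group in df.groupby('race'):
    observed = group.groupby('score')['reoffended'].mean()   # actual reoffense rate within each score bin
    plt.plot(observed.index, observed.values, marker='o', label=race)
plt.xlabel('risk score')
plt.ylabel('observed reoffense rate')
plt.legend()
plt.show()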
This reminds me of Scott’s recent posts. When someone makes a stupid or mendacious argument that can be crudely rebutted, it should be crudely rebutted, even if more sophisticated analyses are possible. But it is wrong to claim that the crude argument is the whole story.
I don’t think the ProPublica piece was either stupid or mendacious.
You are asking too much of them. I would rather they sounded the alarm and were wrong about some stuff than stayed silent.
I think some of this comes down to how much stock to put in “conscious intent”.
If you write stuff you know will be misinterpreted, using language that’s technically false*, is it really important whether you also think to yourself, “Ha ha, now I’m going to lie to you!” as you do it?
Benquo put this better than I could here: Bad Intent is a Disposition, Not a Feeling
Also, it’s tempting to round this off to
Which is unlikely to be a popular attitude ‘round these parts.
(At least Arthur Chu was up front about it.)
*I haven’t read the ProPublica article, so this should be taken hypothetically.
FYI, the “Equality of opportunity” paper with broken links from the article above is here: https://arxiv.org/abs/1610.02413
A quick glance at COMPAS a while ago suggested that it was an overtrained mess, and recent investigators have suggested that it doesn’t even beat humans: http://advances.sciencemag.org/content/4/1/eaao5580
Snort.
Beating humans at this kind of a task is *usually* not hard: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.5825
Ilya: consider a scenario where we have an algorithm that is 100% accurate. It accurately predicts a higher reoffense rate for blacks than whites. Would using this algorithm be discriminatory under any circumstances? Why? Which circumstances?
The idea itself is discriminatory because being black is not a measurable category. Is blackness self reported? Is there a threshold for melanin?
Fair algorithms should work to correct the biases that already exist in society but have no basis in objective reality (we are rationalists, after all), so they would be harsher on privileged classes.
Yes, this would be ‘discriminatory’. Because what most people mean by ‘discriminatory’ has absolutely nothing to do with the statistical meaning of ‘bias’ (and pretending that the everyday meaning of the word has anything to do with the statistical meaning is deeply silly). What the average guy on the street understands ‘biased’ and ‘discriminatory’ to mean is that, in the cases in question, they treat black people worse than white people, whether or not that may be ‘justified’ by the data.
And the assumption of progressives is *not* that black people and white people are equally likely to re-offend after prison release, or equally likely to pay back loans with comparable credit scores, or equally likely to do x, y, or z. There may be instances of individual leftists hopefully proposing incorrect things like this, but this is not the core of the argument. The argument is that even if you find statistical differences between racial categories in any of these regards, it is *fundamentally immoral* to treat individuals as exemplars of their racial groups, especially though not exclusively in cases where race is not inherently linked to the outcome in question and is clearly being used as a heuristic for a messy hodgepodge of socio-economic factors that would require hard work and independent thought to sort out on their own terms.
I don’t think you’ll find many people on the left who are totally unwilling, in the face of convincing data, to acknowledge that algorithms that (indirectly) disfavor black people are more accurate or efficient than algorithms that don’t. Their belief system holds, rather, that it is morally wrong to use those algorithms. I can imagine arguments against this position (I am not a leftist, although I was raised by leftists and have a certain sympathy for their worldview). But those arguments are not being made here; instead we’re hearing a bunch of reasons why it’s more efficient and/or accurate to allow algorithms to penalize racial groups for things that covary with their racial group. That’s not going to convince anybody, because it’s not an argument against their position. And yes, this would extend in the limit to categories beyond race and to car insurance and health insurance and everything else.
Misconceptions like yours are exactly why I wrote my article; ProPublica and other journalists have led you to believe wildly incorrect things.
The COMPAS algorithm does not use race as an input. Neither does any loan approval algorithm.
The COMPAS algorithm treats every person as an individual. It uses a “messy hodgepodge of socio-economic factors that would require hard work and independent thought to sort out”. Data points include things like “how many crimes has this person committed before”, “were those crimes violent”, “does this person have a job lined up”, and “does this person admit to the prison psychologist that they feel lots of uncontrollable rage”.
Not using race as a feature does not avoid the problem, for the obvious reason of race proxies. A common one is zip code in segregated places like Baltimore City, but there are others.
@Ilya Shpitser
I think that you’ve got three kinds of inputs that cause a racial difference:
1. Those that are 100% relevant to the (non-racist) goal. For example, running up credit card debt may correlate strongly and negatively with paying back loans. Black people may more often run up credit card debt, but that doesn’t mean that this parameter is merely a race proxy, because it also works for other races.
2. Those that only work because they correlate with race, like using the number of times someone rented a Madea movie as a parameter. Such a parameter is not predictive for other races: white people who watch a lot of Madea movies would not be any worse at paying back loans.
3. Those that are a mixture.
Type 1 may indicate current racism, for example when black people are denied regular loans based on their race, so they resort to credit card loans. However, it can also indicate the after-effects of racism that is no longer in effect, or it can indicate that black people have subcultures that differ on average from those of other races.
Type 1 can cause feedback loops, which are IMO the only legitimate reason to reject such inputs, based on an equality-of-opportunity morality. To reject such an algorithm, it is then not sufficient to show disparate impact; it has to be shown that the feedback loops actually exist.
Type 2 treats people differently purely for their race, which is incompatible with equality of opportunity.
Type 3 is the unpleasant consequence of not always having good quality data.
The issue with trying to fix this is that erring in both directions is in some ways unfair. If a person has a history of paying back loans, then why should this person be (partially) treated like a person without such a history?
Ilya, it’s fine that there are other definitions of bias. But that’s totally not what is being discussed here (except by you).
Jonahkatz is one of the many people who read the ProPublica article (and many similar articles) and as a result believed entirely incorrect things.
My article aims to point out that a) the beliefs folks like Jonahkatz hold after reading them are incorrect and b) journalists are probably doing this on purpose. My issue is journalists propagating factually incorrect beliefs about what algorithms actually do, not disputes over assorted different conceptions of fairness.
If ProPublica wrote the story “There’s an algorithm that accurately predicts blacks commit lots of crime, but we think you should ignore its factually correct predictions in the name of fairness”, then I’d have no problem.
One interesting question is whether race adds predictive power incremental to even a host of other factors. Across much of the social sciences, race remains a powerful factor that makes forecasts more accurate.
It’s evident, for example, that NFL teams use race as a factor in addition to non-racial factors in determining who to play at cornerback. (Blacks have filled all 64 starting cornerback jobs since 2003.) Whites and Polynesians who would appear to have what it takes to play cornerback in the NFL tend to get routed to playing safety instead. (Bill Belichick, perhaps the all time smartest NFL coach, used Julian Edelman at cornerback at times, but eventually determined he was most valuable at receiver.)
“Except by you.”
—
Earlier in the thread:
me: “Simple example: regressions will fail to parole folks with big criminal records. But in places like Baltimore it’s very easy to get a long criminal record for no better reason than cops loving to hassle African Americans even if they are doing nothing wrong.”
you: “If this were true, then it will show up as statistical bias.”
It seems to me the person who is confused about the difference between “statistical bias” and “selection bias in data generation” is you. I suppose it is also true you read the propublica piece. In the true “let’s confuse correlation and causation” style that you seem to be exhibiting here, you might then blame your confusion on having read the propublica article.
But I think just because you first read A, and then got confused about B does not necessarily imply A caused you to be confused about B. You probably were confused about B to begin with.
Ilya, if you want to see that my claim regarding algos learning biases in their inputs is true, you can validate it in a few minutes with numpy.
Here’s some rough pseudocode you can play with:
import pandas as pd
from scipy.stats import bernoulli, norm

n_samples = 100_000
df = pd.DataFrame()
df['race'] = bernoulli(0.2).rvs(n_samples)                   # 1 = black in this toy model
df['crime_propensity'] = norm(0, 1).rvs(n_samples)           # latent propensity; not observable in practice
df['violent_crime'] = (df['crime_propensity'] + norm(0, 0.25).rvs(n_samples)) > 1.0
df['num_tickets'] = df['crime_propensity'] + norm(0, 1).rvs(n_samples) + norm(0, 0.25).rvs(n_samples)
df.loc[df['race'] == 1, 'num_tickets'] += 3                  # explicit bias: 3 extra tickets for every black person
Now do regression on that dataset, but exclude crime_propensity from the regression (since that’s an unobservable variable). The input data is explicitly biased – every black person explicitly gets 3 extra tickets.
Yet relatively simple regression will learn and correct for the bias in the input data.
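For what it’s worth, a minimal sketch of that check, assuming “do regression” means a scikit-learn logistic regression of violent_crime on the observed features:

from sklearn.linear_model import LogisticRegression

X = df[['num_tickets', 'race']]      # crime_propensity is excluded: it's unobservable
y = df['violent_crime']
model = LogisticRegression().fit(X, y)

coef_tickets, coef_race = model.coef_[0]
# If the regression has absorbed the injected +3 tickets, the race coefficient
# should be roughly -3 times the num_tickets coefficient.
print(coef_tickets, coef_race, coef_race / coef_tickets)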
Let’s talk about asbestos.
In asbestos data, you will have missing data due to the fact that folks who drop out of the workforce are systematically different from those who do not. This is what I mean by selection bias in the data — your data is not a representative sample.
Now, if you use the data as you have it and learn the true regression curve relating asbestos exposure and health, your statistical bias is (by definition) 0. Selection bias is still there.
Similarly, if I did have a representative population (without dropout due to poor health), but I used an incorrectly specified regression model for the asbestos/health link, I would have no selection bias in data generation, but I would have statistical bias in the estimator (due to model misspecification).
Selection bias and statistical bias are different. You can have the first but not the second, or the second but not the first.
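A toy simulation of the asbestos example, with made-up numbers (dropout is modeled as every worker with health below 80 leaving the sample), makes the distinction concrete:

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
exposure = rng.uniform(0, 10, n)
health = 100 - 3.0 * exposure + rng.normal(0, 5, n)      # true slope: -3

# Selection bias in data generation: the least healthy workers drop out and are never surveyed.
observed = health > 80
X = np.column_stack([np.ones(observed.sum()), exposure[observed]])
slope = np.linalg.lstsq(X, health[observed], rcond=None)[0][1]
print(slope)   # noticeably shallower than -3, even though the fitted linear model is correctly specified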
—
As far as I can tell, you are proposing I look into a regression, and then another regression where I exclude a covariate. This is _not_ selection bias in data generation, this is just a marginal model (not including variables in a model is effectively marginalizing those variables out from your observed data distribution). This is again orthogonal to whether the distribution itself is a fair representation of the population you want or not.
The classical example of selection bias in data generation is survey data (and missing data more generally).
edit: omitting variables from the analysis _could_ lead to things like confounding bias if you are interested in causal analysis – but I don’t think you are talking about that here; you seem to just be interested in regression problems.
Ilya, you’re repeatedly harping on the possibility that a regression does not fully capture causality. No one disputes this.
Can you explain the relevance to this case, and how this relates to any of the examples in the article?
I kind of get the impression that you’re trying to Euler us. (In this sense: https://slatestarcodex.com/2014/08/10/getting-eulered/ )
You seem to be misunderstanding the difference between statistical bias and selection bias. I have yet to see you address this. Your python example did the opposite of convincing me. This showed up in your first comment here.
I think based on talking to you we seem to have some general disagreements about the intent of the propublica authors, but aside from that, I am not even sure we are speaking the same language, yet, when it comes to biases.
So either:
(a) I am misunderstanding you (so please explain, if you have time), or
(b) You are actually misunderstanding this, in which case I think we should have a conversation about biases, what you think ProPublica is saying, and what it is actually saying.
Ilya:
The thing is, race proxies may themselves be relevant for what causes crime. To use a simple example, if you grow up in a high-crime neighborhood with a lot of gang activity, you’re probably a lot more likely to end up joining a gang. I’m pretty sure a lot larger fraction of black kids than white kids grow up in such neighborhoods. So if I find out what neighborhood you grew up in and use that to predict the likelihood you’ll commit a crime, I’m going to both be getting a good estimate of your race and also to be learning something actually relevant about predicting whether you’ll commit a crime. Similar things apply to family income, whether your parents were married when you were born/raised, whether you graduated high school, etc.
“The thing is, race proxies may themselves be relevant for what causes crime. ”
I agree. This is why I think fine-grained causal path analysis is important — we have to try to disentangle all that stuff.
Ilya, you’ve now convinced me completely that you’re eulering. You repeatedly hint at certain topics, act superior when someone misunderstands you and thinks you were referring to something else, and refuse to engage with the concrete discussion.
If you believe some particular bias is relevant to any of the articles under discussion, make that argument. If you want to appeal to things like “police hassling black men disproportionately” or “arrests are biased”, first engage with the data (that I’ve cited multiple times) and then make that argument.
Otherwise have fun. I don’t think you’re fooling anyone here, and I don’t plan to waste more time on this.
@Stucchio
I’ve disagreed with Ilya before on other issues, and I’m still somewhat skeptical of the proposed methodology for ensuring algorithmic fairness and the appropriateness of applying it to this particular case, but I am also entirely confident that he is not “eulering” in the sense of trying to snow us with specious jargon. I think you guys are mostly talking past one another.
@stucchio
I’m also pretty sure Ilya is not trying to Euler anyone. He’s being a bit oblique in some posts, but I’ve seen how he posts when he actually isn’t interested in discussing something (at least, as far as I can tell from his behavior then) and this isn’t even remotely close.
I think my current takeaway is that you seem to have a way of finding a link between not understanding something and ill intent. Thanks for your time; I appreciate your trying to engage with me.
I did call your article awful, but it’s just one guy’s opinion!
Trofim, Quanta, as I said I’m happy to engage with Ilya at a concrete level.
I don’t see much point in responding to vague allegations that I’m confused and fail to understand selection bias or causal inference or whatever. I have no intention of engaging in some “prove your expertise” contest judged by him. The only contest I’m willing to engage in is a prediction contest (he was unwilling to do so).
If Ilya wants to say “I think selection bias exists in ProPublica’s example because of X, and it’s relevant to the discussion even though ProPublica never mentioned it because Y”, I’m happy to discuss. If t