There’s a social justice concept called “distress of the privileged”. It means that if some privileged group is used to having things 100% their own way, and then some reform means that they only get things 99% their own way, this feels from the inside like oppression, like the system is biased against them, like now the other groups have it 100% their own way and they have it 0% and they can’t understand why everyone else is being so unfair.
I’ve said before that I think a lot of these sorts of ideas are poor fits for the one-sided issues they’re generally applied to, but more often accurate in describing the smaller, more heavily contested ideological issues where most of the explicit disputes lie nowadays. And so there’s an equivalent to distress of the privileged where supporters of a popular ideology think anything that’s equally fair to popular and unpopular ideologies, or even biased toward the popular ideology less than everyone else, is a 100%-against-them super-partisan tool of the unpopular people.
So I want to go back to Dylan Matthews’ article about EA. He is concerned that there’s too much focus on existential risk in the movement, writing:
Effective altruism is becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse.
And:
EA Global was dominated by talk of existential risks, or X-risks.
And:
What was most concerning was the vehemence with which AI worriers asserted the cause’s priority over other cause areas.
And:
The movement has a very real demographic problem, which contributes to very real intellectual blinders of the kind that give rise to the AI obsession.
It sounds like he worries AI concerns are taking over the movement, that they’ve become the dominant strain, that all anybody’s interested in is AI.
Here is the latest effective altruist survey. This survey massively overestimates concern with AI risks, because only the AI risk sites did a good job publicizing it. Nevertheless, it still finds that of 813 effective altruists, only 77 donated to the main AI risk charity listed, the Machine Intelligence Research Institute. In comparison, 211 – almost three times as many – donated to the Against Malaria Foundation (note that not all participants donated to any cause, and some may have donated to several).
An explicit question about areas of concern tells a similar story – out of ten multiple-choice areas of concern, AI risks, x-risks, and the far future are 5th, 7th, and last respectively. The top is, once again, global poverty.
I wasn’t at the EA Summit and can’t talk about it from a position of personal knowledge. But the program suggests that out of thirty or so different events, just one was explicitly about AI, and two others were more generically x-risk related. The numbers at the other two EA summits were even less impressive. In Melbourne, there was only one item related to AI or x-risk – putting it on equal footing with the “Christianity And Effective Altruism” talk.
I do hear that the Bay Area AI event got special billing, but I think this was less because only AI is important, and more because some awesome people like Elon Musk were speaking, whereas a lot of the other panels featured people so non-famous that they even very briefly flirted with trying to involve me.
And – when people say that you should donate all of your money to AI risk and none to any other cause, they may well be thinking in terms of a world where about $50 billion is donated to global poverty yearly, and by my estimates the total budget for AI risk is less than $5 million a year. There are world-spanning NGOs like UNICEF and the World Bank working on global poverty and employing tens of thousands of people; in contrast, I bet > 10% of living AI risk researchers have been to one of Alicorn’s weekly dinner parties, and her table is only big enough for six people at a time. In this context, on the margin, “you should make your donation to AI” means “I think AI should get more than 1/10,000th of the pot”.
I suspect that “AI is dominating the effective altruist movement”, when you look at it, means “AI is given an equal place at the effective altruist table, compared to being totally marginalized everywhere else.” By figure-ground illusion, that makes it seem “dominant”.
Or consider me personally. I probably sound like some kind of huge AI partisan by this point, but I give less than a third of my donations to AI related causes, and if you ask me whether you should donate to them, I will tell you that I honestly don’t know. The only reason I keep speaking out in favor of AI risks is that when everyone else is so sure about it, my “I don’t know” suddenly becomes a far-fringe position that requires defending more than less controversial things. By figure-ground illusion, that makes me seem super-pro-AI.
In much the same way, I have gotten many complaints that the comments section of this blog leans way way way to the right, whereas the survey (WHICH I WILL ONE DAY POST, HONEST) suggests that it is almost perfectly evenly balanced. I can’t prove that the median survey-taker is also the median commenter, but I think probably people used to discussions entirely dominated by the left are seeing an illusory conservative bias in a place where both sides are finally talking equally.
Less measurably, I think I get this with my own views. I despair of ever shaking the label of “neoreactionary sympathizer” just for treating them with about the same level of respect and intellectual interest I treat everyone else. And I despair of ever shaking the label of “violently obsessively anti-social-justice guy” – despite a bunch of posts expressing cautious support for social justice causes – just because I’m not willing to give them a total free pass when they do something awful, or totally demonize their enemies, in the same way as the median person I see on Facebook.
Or at least this is how it feels from the inside. Maybe this is how everybody feels from the inside, and Ayatollah Khamenei is sitting in Tehran saying “I am so confused by everything that I try to mostly maintain an intellectual neutrality in which I give Islam exactly equal time to every other religion, but everyone else is unfairly hostile to it so I concentrate on that one, and then people call me a fanatic.” It doesn’t seem likely. But I guess it’s possible.
To anyone else also baffled by the ordering of “AI risks, x-risks, and the far future” (“5th, 7th and last”), it turns out the relevant options in the survey were AI risk, x-risk (other than AI) and far future (other than x-risk).
This is very reassuring, thanks!
“This is taking over the movement” is a process, not a measurement of how much it has already taken over as of this particular moment. If it is increasing and has not leveled off, “it is taking over” may be an accurate description even if its current percentage isn’t massively high. This is especially the case if it is being promoted more than other ideas whose percentage of the movement is similar.
Furthermore, whether something counts (to non-literal speakers) as X taking over Y depends partly on the relation of X to Y, especially when X as a movement didn’t arise from Y. If 25% of doctors believed in homeopathy, many people would describe it as taking over even though 25% is nowhere near a majority. Likewise if 25% of doctors decided that being a Republican is a medical condition.
I didn’t include this because I didn’t think it was relevant to the main thesis, but x-risk has been part of EA since the beginning (GWWC founder Toby Ord was originally an x-risk scholar at FHI) and x-risk people seeded the original core of effective altruists. I would bet that concern with x-risk and AI is declining as a percent of the movement; back in 2010 a really big chunk of the movement was LWers; now they’ve diversified beyond that base.
I wish I had good statistics on this, but there’s only one year’s worth of EA survey so I can’t plot a time course. All I have are my impressions – but I think they’re better than Matthews’.
Toby only recently took a paid appointment at FHI (previously he was elsewhere at Oxford). But he has been spending time at the FHI and coauthoring papers with FHI folk for many years, and indeed before he started Giving What We Can.
http://arxiv.org/abs/0810.5515 (2008 paper with FHI folk on model uncertainty for low-probability risks)
https://en.wikipedia.org/wiki/Giving_What_We_Can (GWWC 2009)
And of course ur-figure Peter Singer was previously interested in extinction risk and the long-run (e.g. nuclear war and climate change, but not I think AI) for total utilitarian reasons.
http://www.nytimes.com/2005/01/02/books/review/catastrophe-apocalypse-when.html?_r=0
The origins of EA and the relationships between its different tendencies make more sense to me when I think of them as different types of “Utilitarian Fundamentalists.”*
*Not that there’s anything necessarily wrong with that.
I would then ask how EA was sold back when it was started. Was the connection between EA and AI risk emphasized or deemphasized? If it was deemphasized (even if only to make EAs seem less crazy), but it is emphasized more now, people would notice an increase in emphasis, and consider that a takeover.
I can only tell you that MIRI (then SIAI) emphasized it, and used EA reasoning in their calls for donations. That’s how I first heard of EA, years before I heard of Peter Singer.
The name ‘effective altruism’ came after the existence of Giving What We Can and GiveWell.
There have been recent changes in activity, e.g. OpenPhil donating $1MM to the FLI grant program alongside Elon Musk (while donating tens of millions to global poverty things, including $25MM to GiveDirectly last week). That seems to have been significantly driven by the FLI open letter and endorsement by more major mainstream figures (Bill Gates, Stuart Russell, Elon Musk), the response to Superintelligence, and the grant program being oriented at getting mainstream researchers addressing relevant problems from AI.
Giving What We Can continues to only recommend poverty charities, but has broadened its pledge to allow anyone donating to help people as effectively as possible to join (which has mostly led to more animal activists joining, and some people who were giving to poverty charities but wanted to retain flexibility in case of something even better coming along).
The Center for Effective Altruism came later (expanding around Giving What We Can) and has done things like incubate Animal Charity Evaluators, 80,000 Hours, and the Global Priorities Project. Will MacAskill’s book covered poverty charities, animal welfare, global catastrophic risks, and other causes being investigated. The CEA is also down the hall from FHI at Oxford.
To the extent there has been a shift it would be OpenPhil getting a little bit involved in AI (among other catastrophic risks, with biosecurity coming ahead of AI in its prioritization spreadsheet), more interest around CEA in global catastrophic risks, and perhaps more attendance of people interested in AI at EA events. And the FLI open letter, OpenPhil’s involvement, and similar mainstream attention to broadly construed AI risk issues have contributed to it rising in standing or interest as a cause.
Taking the derivative of a noisy measurement (and ‘perceived support’ is surely a noisy measurement) tends to amplify the noise. So people could easily decide that things are going in the wrong direction.
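A toy simulation makes the point concrete (the signal, noise level, and step size here are all made up for illustration): even when the underlying trend is a steady +1 per step, the step-to-step differences of a noisy measurement swing wildly and frequently look like decline.

```python
import random

random.seed(0)

# A slowly rising "true" level of support: +1 per step.
true_level = [100 + t for t in range(50)]

# Each observation adds noise of roughly +/-10 (uniform).
observed = [x + random.uniform(-10, 10) for x in true_level]

# The level itself is measured with at worst ~10% relative error,
# but the step-to-step difference (the "derivative") should be +1
# and instead swings far outside that, often changing sign.
diffs = [b - a for a, b in zip(observed, observed[1:])]

print("diff range:", min(diffs), "to", max(diffs))
print(sum(1 for d in diffs if d < 0), "of", len(diffs),
      "steps look like decline despite a rising trend")
```

Differencing subtracts two noisy values, so the noise roughly doubles while the true signal per step stays small; that is why perceived direction is so much less reliable than perceived level.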
I think in the case of Social Justice people are simply using different standards of measurement. Affirmative Action, for instance, increases the number of blacks and Hispanics who have good jobs and reduces the number of whites and Asians who have good jobs compared to the market outcome. Whites and Asians are still overrepresented among people who have good jobs, but blacks and Hispanics who are on paper unqualified to have good jobs become overrepresented, while whites and Asians who are on paper qualified to have those jobs become underrepresented.
Whether you consider blacks and Hispanics overrepresented therefore depends on what you think about the paper qualifications as a just standard for selection, not so much on how many blacks and Hispanics there actually are with good jobs. Basically the two groups are founding their views on different assumptions and then, instead of testing those assumptions, just talking past one another. The debate on affirmative action (and most other SJW issues) is *really* about whether various groups of humans actually are equal (in which case inequality of outcome must be a result of social oppression) or are not (in which case unequal outcomes may well be just, or at least not a result of oppression by other people).
When it comes to AI risk, I do not see this sort of fundamental difference of worldview. The vast majority of people in EA agree that the purpose of EA is to minimise the number of preventable deaths, maximise utility, or something along those lines. Some people are arguing that AI risk is actually more important than global poverty, because AIs might kill many more people or reduce the utility of living people very much more than global poverty is currently doing. Others strongly disagree, probably in most cases because the hypothesis strikes them as unfamiliar and weird.
Now, I don’t necessarily agree with this hypothesis, but I think that if it could be conclusively proven the people who currently think AI risk is overrepresented in EA (including me) would change their minds on its importance within EA. Proving that affirmative action causes blacks and Hispanics to get jobs with worse grades than whites and Asians, or proving that blacks and Hispanics still earn less than whites and Asians with affirmative action, will not change many peoples’ minds on that issue.
Your argument about Social Justice assumes that if two groups are “equal” we should expect them to be approximately equally represented in a given field in the absence of discrimination. I fail to see any support for this assumption. Peoples/genders have different cultures.
One can look at the average household income of various ethnic groups in the USA here: https://en.wikipedia.org/wiki/List_of_ethnic_groups_in_the_United_States_by_household_income . The differences are extreme and not explained by racism. Even groups within the same “races” (say white, South Asian, Southeast Asian, Hispanic, black, Middle Eastern) often differ very significantly in income.
Another simple example is that psychology PhDs are about 70% female and molecular biology students are about 60% female. I doubt these fields discriminate against men.
Culture is a social factor, so the Social Justice argument is that culture is oppressive if it results in groups they choose to care about not achieving goals they choose to care about, like black people earning less money. It doesn’t necessarily have to be an identifiable person deliberately oppressing another.
I do agree with you that people who believe this theory seem to develop mysterious blind spots in areas where groups they choose to care about outperform the average, but I am trying to summarise the views held by others, not argue a case of my own.
“Your argument about Social Justice assumes that if two groups are “Equal” we should expect them to be approximately equally represented in a given field in the absence of discrimination. I fail to see any support for this assumption. People’s/genders have different cultures”
One notes that even if “equality” meant that the sort of traits that determine what sort of job you choose were randomly distributed across the population without regard to different cultures or any protected class — you would STILL not expect every job to resemble the general population. Indeed, if I ran across such a case, I would probably start throwing about — well — 10^-67 or similar numbers.
After all, if you were flipping a coin and it went heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails-heads-tails- — how long would it take you to think something was screwy?
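The arithmetic behind that intuition is simple: out of the 2^n equally likely sequences of n fair-coin flips, exactly two alternate perfectly (HTHT… or THTH…), so the probability collapses exponentially. A quick sketch:

```python
from fractions import Fraction

def p_perfect_alternation(n: int) -> Fraction:
    """Probability that n fair-coin flips alternate perfectly.

    There are 2**n equally likely sequences, and only 2 of them
    alternate (one starting with heads, one with tails).
    """
    return Fraction(2, 2 ** n)

# By ten flips the run is already suspicious; by sixty it is
# astronomically so.
for n in (10, 20, 60):
    print(n, float(p_perfect_alternation(n)))
```

So a run of sixty alternating flips has probability on the order of 10^-18; well before you reached the end of that sentence, "the coin is rigged" would be the overwhelmingly better hypothesis. The same logic applies to any exact match between a job's demographics and the general population.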
In a universe where Benford’s Law is a real thing, we should be suspicious when we see a normal distribution, not when we don’t.
I’m pretty sure that it’s clear to everyone in discussions of AA that we’re sacrificing some amount of quality for diversity, the disagreement is largely over how much quality is sacrificed or ought to be.
No, I have seen people who treat qualified vs. unqualified as a binary, and argue that affirmative action doesn’t result in a reduction of quality because every affirmative action hire is “qualified”.
The other tactic I’ve heard is to set up a strawman of perfect meritocracy where you supposedly hire the people who are absolutely optimal with absolute certainty, shoot down that strawman, and offer the “everyone is qualified” standard as the only alternative, without acknowledging that there might be other positions.
I think a lot of people support AA because they believe underrepresentation is due to racism on the part of employers. Can you link to articles from the recent focus on white and Asian dominance of the tech industry suggesting that the tech industry should accept lower productivity in exchange for achieving social goals? Because I don’t remember any, but I seem to recall a number of articles arguing that AA in this sector would increase profitability, so those white and Asian owners are implicitly leaving money on the table in their bid to exclude blacks and Hispanics.
Barely-maybe-tangentially related: https://modelviewculture.com/pieces/the-dehumanizing-myth-of-the-meritocracy
Taken from here: https://github.com/opal/opal/issues/941
Obviously about opens source rather than industry, but I found this so baffling I felt it was worth linking to.
Yikes. That article is something.
“His point underlines a significant problem with our already flawed notion of meritocracy: within this system, the worth of the individual is measured not by their humanity, but solely by their intellectual output.”
I hope this belief is not common, that is a pretty serious category error.
Just as me being a worthless x-risk researcher has no bearing on my worth as a human. My worth as a human should have no bearing on my worth as an x-risk researcher.
I think I’ve met too many humans to want my worth to be measured by my humanity.
People who are very human and not worth much would obviously favor measuring worth in terms of humanity.
(I’m worthless *and* inhuman so I’ve got no dog in this fight)
Nornagest for comment of the month.
I hope this belief is not common, that is a pretty serious category error.
It is extremely common, in that many people believe that an unwillingness to hire “equally” indicates malevolent motives. Since obviously talent is normally distributed, and in every OTHER case corporations are money-generating robots with no motivation other than greed (that part is semi-fair), if they’re “leaving money on the table” it can only be due to Pure Evil. When you point out this discontinuity, they instantly explain it with bigotry/racism.
“I think a lot of people support AA because they believe underrepresentation is due to racism on the part of employers”
Then they should be able to provide examples of it.
Instead we get statistical innumerates insisting that a disparity means discrimination.
I think a lot of people support some level of AA because regardless of what led to the disparity, the market alone hasn’t seemed to effectively address the problem, and, well, you’ve got to start building a minority middle class somewhere . . . .
Except that the decade in which blacks showed the most economic progress was the 1950s. Instituting anti-discrimination laws produced no change. Instituting AA slowed it down; furthermore, disaggregating showed that what that meant was already-well-off blacks continuing to gain, and poor blacks stopping.
I point out that it was not solely us. India saw the same thing with AA for untouchables, and Malaysia for “sons of the soil.”
@ Mary:
Could you link to the data for Indian dalits+OBCs and Malaysian bumiputras? I didn’t know the same effect also occurred in India as well, and would really like to know more.
well, you’ve got to start building a minority middle class somewhere . . . .
Low-income housing is also a rather high-priority thing that we’ve got to start building somewhere. Since the most important function of a house is to keep the rain off people and their stuff, I propose we start with the roof.
So… tents?
Yes, tents, and never anything better than tents.
And persistently complaining about the inequality of some people living in the tents the complainers built for them while other people are living in nice houses they built for themselves.
Malaysian Bumiputeras are especially interesting because they are a majority in Malaysia and they have all the political power, but still dramatically underperform the Chinese economically even after turning them into official second class citizens. Malaysia is really strong evidence that economic success in even vaguely market-oriented economies is not based on possessing political power or social status. It’s also a good illustration of how AA and Nuremberg laws lie on a continuous spectrum.
@ Mary
Could you link to where you read about the 50s being the best decade for black Americans economically? I’m pretty sure that isn’t true, but I’m open to being pretty wrong.
I wouldn’t be too surprised if it actually was the 1950s, if only because they would have started with little and that was a good decade for workers in general…
> I think a lot of people support AA because they believe underrepresentation is due to racism on the part of employers. Can you link to articles from the recent focus on white and Asian dominance of the tech industry suggesting that the tech industry should accept lower productivity in exchange for achieving social goals?
Your experience/dataset is very, very different from mine. One example that comes to mind was the much-discussed release of diversity numbers by Google, in which female employees were 49% of non-technical roles and 30-something percent of technical roles. This sounds bad until you realize that the percentage of technical roles just about lines up with the percentage of Computer Science graduates who are female (and is thus no evidence that there’s any discrimination by gender in hiring at Google).
As I recall from the conversation around the release of the report, even those articles that paid lip service to this fact had as their central point: “Google’s diversity numbers are not pretty, they have a lot of work to do to fix it”. And of course the majority of the articles I saw didn’t mention this highly significant fact at all.
There’s an argument to be made for AA as a partial corrective to credentialism. Paper credentials don’t arise in a vacuum. For people from a negative social background, paper credentials will be more difficult to accumulate because the situation persistently interferes.
If paper credentials don’t correlate perfectly with ability to do the job, and hiring managers over-rely on paper credentials, then as a consequence people from negative social backgrounds will be underrepresented relative to their ability to do the job.
I see this narrative as plausible; the cognitive and economic incentives of hiring managers (i.e. risk aversion, reliance on filtering software, lawsuit CYA, similarity heuristic) make a market failure of this sort quite believable. AA of some variety could reasonably apply as a corrective, ultimately improving productivity.
Applying AA by race seems sub-optimal for this purpose; economic or geographic categories would probably be more useful. But race is correlated enough with those that race-based AA could well be net beneficial in this context.
There was some interesting work on this done by Oxford and Cambridge Universities* a few years ago where they found that students from schools that got poorer average results got better results at university than students with the same grades from schools that got better average results.
The students from schools with poorer average results tended to be working-class and/or non-white.
The result was that they lowered their admission standards where a student attended a school with poor results overall and raised them for students attending elite schools – their objective being to admit the students who will achieve the maximum level at the end of their university career, of course.
If we suppose that talented people who received poor teaching were held back as a result, then the ones who get good (but not great) grades are comparable in ability to the ones who get better grades from better teaching. When put into the same environment (i.e. the same job), they will end up delivering the same performance, so the correct recruitment strategy for employers is affirmative action for people who were held back by their environment. To the extent that race is a good proxy for this, affirmative action makes sense in pure meritocratic terms.
* for non-Brits, they are the two most prestigious universities in the country; while they are public-sector, they have the prestige and the highly-selective admission standards of the US Ivy League or MIT/CalTech. Britain doesn’t have an equivalent of the SATs, but does have nationally-standardised subject examinations, so grades are directly comparable nationwide.
FN: in this context BrE School = AmE “high school”, BrE “university” = AmE “college”.
If paper credentials don’t correlate perfectly with ability to do the job,
1) They don’t. No argument.
2) They’re still the best thing employers have to go on which they are allowed to use.
3) There is no evidence that using any other method would produce “better” results in that diversity would increase without significantly reducing the productivity of the employer. There is considerable evidence to the contrary.
the correct recruitment strategy for employers is affirmative action for people who were held back by their environment.
This assumes that the graders at both levels of educational institution were being honest, and that the reason students at a school with poor average results got bad grades is because the teachers, while bad, were objective about grading. Since in a considerable portion of the US educational system this is not the case, the proposed heuristic will not work very well. And by “not very well” I mean “it will produce terrible results.”
@ Marc Whipple
Presumably we’re talking about grades in the standardized examinations Richard mentioned in his comment, not the grades they got in their school classes.
@ Nita:
Please c/p the phrase “in the US” from where it is in my post to after the words “will not work very well.” If it wasn’t clear I was limiting my comments to why something that might be true in the UK isn’t true here, my apologies for the failure.
Since the only standardized examinations given in the US short of graduate school are reasonable proxies for IQ tests, any attempt to use them for this purpose would soon fall afoul of Duke Power.
Not only do I think that’s incorrect, I think it’s extremely rare that you’ll find any kind of policy argument where it’s the case that both sides agree there’s a trade-off and the argument is how much of a trade-off there is and how much we should be willing to trade off. It’s much more common to deny there’s any trade-off at all happening, that there is anything negative associated with whatever it is you’re in favor of.
This is, unfortunately, very true.
Strongly agreed. It resonates with a general lament of mine – polls which ask if the person is in favor of getting X, implying that the alternative is getting nothing. If they think X is even slightly good, they’ll say yes, and the poll will report strong approval of X.
If a poll instead asked whether the person would rather have an N% higher representation of $group in white-collar professions or a K% increase in white-collar worker quality and, say, $10/year in extra personal spending money (whatever would be spent on a program implementing the former, split evenly), the poll would give a much more accurate account.
Agreed
I think I was actually trying to say something other than what I actually ended up saying, but if that’s so it doesn’t matter, because I now have no idea what my intent was, can’t remember. Maybe I was just being dumb.
There are policy conversations where if you ignore idiots the conversation can’t move forward.
The example that leaps to mind is the existence of the American Debt Ceiling, which serves no obvious purpose except nearly triggering a worldwide apocalypse every once in a while (and isn’t mandated by any other law, no fancy repeal needed). Once you establish some basic facts to frame the discussion, there isn’t much to say, but man, when people push back about stuff like that you should keep persuading them, because did you hear about that apocalypse that could happen if you don’t?
The purpose looks pretty obvious to me. This is a start: http://lesswrong.com/lw/ase/schelling_fences_on_slippery_slopes/
@hawkice,
The existence of the debt ceiling brought a refreshing movement in the direction of fiscal sanity during 2013, The Year of Responsible Government, a year which saw the Federal deficit cut by $500 billion.
Given the entirely predictable demographic tsunami facing Social Security and Medicare a few years from now, I count this as the signal achievement of the U.S. Government this century. Debt ceiling = grown-up government.
(And, of course, in the throes of default, U.S. Treasuries rallied, reaching record low yields. Some default.)
Caue, that might have been it, yes. Regardless, your idea is a good one and seems correct to me. People are failing to acknowledge the tradeoff directly; they are acting as if they do not believe in a tradeoff. But perhaps somewhere in the back of their minds they do acknowledge that all things have costs, and it’s just that within their perceptions the costs are so low they don’t deserve any conscious attention at all, and they don’t want to make a rhetorical concession that might cause themselves or their cause to lose face.
Therefore, eg, “theoretically”, white people can suffer from racism. I think most of even the highly radical SJ people do believe this somewhere within their minds, but mentioning such possibilities is viewed as offensive and counterproductive by the group norms, so it doesn’t really happen. Evil lurks between individuals, rather than solely within their hearts and minds.
If the debt ceiling is entirely frivolous, or worse, down-right apocalyptic, why do I pay taxes?
@Randy M, When the government issues its own currency, taxes are essentially a mechanism for partially balancing government spending so the money supply doesn’t grow out of control and produce massive inflation. The way our government creates and spends money to maintain a slowly growing money supply is absurdly convoluted, but part of it is just creating and spending money. Taxes are there because creating too much money causes problems, and the formal system of issuing debt is there for a bunch of other reasons (probably partly including creating a convoluted enough structure that people don’t realize that just creating and spending money really is part of what’s going on).
@Protagoras,
You write:
“taxes are there because creating too much money causes problems”. This sounds wrong to me.
Taxation, and budgeting generally, is about discipline, trade-offs, paying your way, intergenerational equity. Young people in particular should pay attention to that last bit.
Monetary policy is not an elixir that nullifies the need to make choices. At best, it’s a way for an economy to avoid shooting itself in the foot (see 1929-33 and 2008 for examples of self-inflicted wounds.)
That sounds strange. AFAIUI, taxation is the protection fees that the subjects pay the rulers in order to be allowed to live and work in the rulers’ lands.
Did you mean taxation exclusively in context of budgeting and sustainability?
@AngryDrake,
I meant taxation in the context of a republic of adults. A republic that, for example, imposed an income tax on itself via Constitutional Amendment in 1913.
I’m definitely not thinking in the context of the Sans-culottes.
The government isn’t just a really large household, taxes aren’t just a really big salary, and social security isn’t just a really expensive pickup truck, and treasury bonds aren’t just a credit card with a really big spending limit. There are serious differences of kind, not just scale.
I’m sorry if that messes with your little morality tale.
So, it seems like A, there is an upper limit to what the government should add to the monetary supply, so some debt limit is not necessarily wrong, and B, the system is so intentionally complex that someone not getting the details is not a sign that they are an idiot.
The problem with the debt limit isn’t that it limits the total amount of debt, that’s neither here nor there. It is that it is a stupid redundant gesture.
Congress is the very organization that sets the rules for taxing and spending, which determine the debt level.
It’s as if you had to sit through a board meeting where you first voted on the CEO’s salary, the CFO’s salary, the CTO’s salary, etc., and then at the end of the meeting, after all the numbers were set, you had to listen to three more hours of debate on a vote for total CxO spending, and if it didn’t pass, well, then all of them would be fired. It’s just silly.
Edit: Agreed that not understanding all the details doesn’t make one an idiot. I certainly don’t. I must, however, admit a little apathy for people who don’t understand the details but hold an irrebuttable presumption it’s really simple and they totally get it.
@brad, ah yes “governments aren’t households” is the gateway to magical thinking in macroeconomics.
The idea that people shouldn’t use this flim-flam to screw the next generation is, I suppose, a kind of moralizing.
But this isn’t just a far flung prediction that depends on a lot of complicated economic variables. The chickens are coming home to roost, and right soon. Simple demographics. The CBO knows it and tells anyone willing to listen.
We have for the most part squandered the “low dependency load” that comes with a concentration of baby boomers in peak earning years over the past 15 years. Yup, the demographic fat years are in the rear view mirror now. Harvest time is approaching, and all the shiny new ideas for spending government money will be crowded out over the next decade by entitlement spending and debt service. It’s gonna be a crummy time for fans of government spending. Don’t say you weren’t warned.
“I must, however, admit a little apathy for people who don’t understand the details but hold an irrebuttable presumption it’s really simple and they totally get it.”
More like, “the reliable and repeatedly proven principles that govern an analogous situation will be applied unless someone can cogently explain why they shouldn’t be.”
Then the mockery and hyperbole entrench the notion that the explanations are the flim-flam of a confidence man.
There are dozens of places around the country where you can go and someone will cogently explain it to you (for a fee). Or if you prefer you can buy a few books and learn it on your own.
Do you think the microscopic world must be governed by Newtonian mechanics (those reliable and repeatedly proven principles) because no one has sought you out to explain QM to your satisfaction?
Brad, the government really is like a really big household, just a household with a printing press in the basement. The only real difference between the government and any other organization (including households) is that the government doesn’t let anyone else run a printing press.
In other words, inflation is a tax on cash balances and seigniorage is the profits of a government monopoly. These facts do not imply that budget deficits do not matter. Not by a long shot. They simply imply that a government that has its own currency can substitute one kind of tax for another.
EDIT: HTML, right. Whoops.
I don’t think that is agreed upon by people who support AA. The arguments against that framing are that:
1: AA attempts to correct for implicit and structural bias against under represented minorities.
2: An increase in diversity _is_ an increase in quality. Not true in general, but it is true in some fields such as medicine. Black doctors are more likely to go on to practice in majority black communities, for example, and patients prefer physicians of the same race. It’s very hard to get objective data on whether physicians provide better care to patients of the same race, but I think it’s quite plausible, especially in the case of primary care, in which learning about your patient’s lifestyle and culture is a big part of your practice (probably less true of neurosurgery, which is strictly about the science). But the fact that it increases patient satisfaction is suggestive.
Doesn’t the claim “A group of 10 white guys is inferior to an ethnically mixed group of 10 people because their ethnicity makes them diverse” imply that there are in fact racially-based differences between people?
When I was growing up, I could have sworn “anti-racism” meant arguing that race was irrelevant and everyone was the same. Now it seems to argue that “[minority races] are all different and unique, but only in good ways, and you must recognize and agree with this or else.”
I just don’t see how you can argue against racism by stipulating point-blank that races are fundamentally different.
Why not respond to my specific points instead of rounding me off to stupid things I never said? I never said that the races were fundamentally (which I assume is code for genetically) different. And I never said they aren’t. I did imply that they have different lifestyles on average and I can’t imagine anyone would disagree.
I never said that an ethnically mixed group of 10 people is “superior” to 10 white guys. I did claim that right now on the margin, increasing the amount of diversity in some fields (such as primary care in medicine) increases quality. Not because black doctors make fundamentally better primary care doctors, but because majority black communities have fewer primary care doctors; black primary care doctors are more likely to practice there; and the community is more likely to want them as physicians.
@ Alexander Stanislaw
You said that “An increase in diversity _is_ an increase in quality,” but this can only be true if there are in fact fundamental differences.
Otherwise there would be no qualitative difference between a group of 10 white Anglo-Saxon protestants and any other randomly selected group of people diverse or otherwise.
Hence Null H’s reply.
Alexander, I found your argument entirely comprehensible, and am baffled by the interpretation of it these two are providing. Just so you know that you’re not “the crazy one” here.
I’m afraid you’re crazy with him. If you really think that an increase in diversity is an increase in quality, would you want your X-rays at the hospital checked by the best radiologists they’ve got, or by ones of different race regardless of skill?
@hlynkacg
I think I’ve stated pretty precisely why diversity can be a desirable thing. And perhaps I’m reading too much into the word “fundamentally”. But to round this off to
is bullshit, and I suspect that Null has no interest in addressing my substantive points (though I’d love to be pleasantly surprised).
To attribute a fake quote to me:
is also bullshit.
At this point, I’ve badly failed Leah Libresco’s internet communications philosophy, and given up much of the chance for productive discussion (partially out of lack of faith). Which is unfortunate, because I do think AA is an interesting topic that is particularly ripe for high feelings and lack of precision. And as a result it usually gets sloppy treatment.
@Mary
I’ve very clearly stated the areas of medicine where diversity is valuable (primary care*). So I’d prefer not be straw manned.
*For the very easy to understand reason that quality is patient dependent. A doctor who I can actually access because of location is better for me than a doctor I cannot. A doctor who I can communicate more easily with because we come from a similar cultural background is better for me.
Wow, you think that some militaries should use both tanks and airplanes, what a moron, that’s implicitly conceding there are value-relevant differences between tanks and airplanes, which means you should obviously commit to just one or the other. Jeez, you dumb air-land battlist, either it’s optimal or it’s not, this doesn’t have to be hard, just set aside those dumb biases.
@27chaos
Thank you, I didn’t think that my points were that opaque. But I don’t think parody comments are productive (not that I’ve kept this as productive as I could have).
@27chaos
Are you suggesting that there are qualitative differences between aircraft and tanks?
’Cause I’m pretty sure that we established that they were equal.
What the fuck is even going on here?
@Alexander I just reported my own comment, I think perhaps a week long ban would be a good thing for me. #YOLO
@Alexander Stanislaw
And I think I’ve stated precisely why those claims are dependent on there being fundamental differences between races.
Which, as I noted, is rather counter-intuitive for someone who is supposedly trying to fight racism.
I suspect that ‘fundamental’ may be doing some work here that needs to be examined. It is conceivable for there to be contingent cultural differences between different demographics, which got to be there through factors that do not have any significant genetic/HBD causes, which mean that when a team contains people from several different demographics, it has overall more desirable qualities than a team drawn from just one demographic.
I don’t know how often that actually happens, but it’s not hard to imagine different cultures, through random drift, coming to prioritise different skills, such that when you put a group of people drawn from those different cultures in a room together, they complement each other.
Let me sum up this conversation thread so that people are not lost
Alexander Stanislaw:
“In some cases, diversity is good. For example, in primary care doctors, we should hire more primary care doctors who listen to Slayer. This is because they are more likely to be doctors in areas where the majority of people listen to Slayer, and also because patient satisfaction improves when both doctor and patient relate by talking about their favorite Slayer albums”
HlynkaCG:
“This proves that people who listen to Slayer are fundamentally different from people who don’t listen to Slayer”
What Winter Shaker said.
@HlynkaCG
I don’t know what “supposedly trying to fight racism” means and I don’t know what you mean by “fundamental”*. My points are as follows, if you disagree with one, then I’d be interested to hear why. I won’t respond unless you say whether you agree with 1, 2.1, 2.2 and 2.3 and why.
1)The black community is particularly in need of better primary care.
2) Having more black doctors will help with this.
2.1) Black doctors are more likely to practice in areas that the black community can access.
2.2) Black patients tend to trust and be better satisfied by care from black physicians.
2.3) Black physicians are more likely to share a common culture with black patients, enabling better communication.
*Remove the fundamental, and it becomes trivially true that people of different races have different life circumstances on average. Why is this a startling revelation?
@J. Quinton
I’m not sure what the substitution of “people who listen to Slayer” was for. Perhaps I should have started out by making point 1 explicit: that the black community has much less access to and utilization of health care at the present. The substitution makes no sense given that this is the problem that AA is trying to make a dent in.
I didn’t think that most people were unaware of this, but here is the relevant data.
@Alexander Stanislaw
>I won’t respond unless you say whether you agree with 1, 2.1, 2.2 and 2.3 and why.
1) Agree.
2) Disagree, as you seem to be assuming facts not in evidence.
2.1) Why? Say you start giving bright young students from “bad” neighborhoods full-ride scholarships to med school and all the opportunities that entails. How many of them do you think will stay in their old neighborhoods, and how many will expect to use their new-found wealth and status to move to a nicer one? My own experience strongly suggests that most will take the latter over the former.
2.2 & 2.3) You seem to be assuming that skin color is a far more prominent concern and potential obstacle to both care and trust than things like socio-economic background, shared cultural/religious values, or poverty in general.
Aside from skin color what exactly would a black Harvard grad from an upper middle class family, have in common with a former gang-banger, or a recent Sudanese immigrant?
Excellent! Thank you for responding to my points.
2.1) The simplest argument in favor of this is that it is empirically true. In point of fact URM physicians tend to practice more in areas where there is a lack of access to healthcare. And this shouldn’t be terribly surprising given how race is spread out across the US. An earlier survey showed similar results.
2.2) This is also simply empirically true. Point 2.3 deals with whether patients actually receive different care from same-race physicians. But this point, that patients _perceive_ different care, is well established. More here. Regarding socioeconomic background, who said I was opposed to creating incentives for those less represented along those lines? There are already incentives for rural physicians, which should also address shortage-of-care problems in rural areas. As for religion, that is a legal minefield, unenforceable, and is somewhat already addressed by other incentives (minorities are more likely to be of a minority religion).
2.3) This is the most difficult point since it rests on anecdotes and plausibility. Since points 2.1 and 2.2 have more empirical backing I’ll drop this for now.
@Alexander Stanislaw
…and I remain unconvinced that these variations are really the result of skin color and not simple social or cultural confounders such as “speaking Spanish”.
As someone else said further down the thread. If you want to select for diverse social, economic, and cultural backgrounds select for them. Don’t use a proxy, especially not a proxy that’s as loaded with as much baggage as skin color.
It seems that you agree that 2) goes some way towards solving 1), though you think there is a better way to do it? If such a way exists then I would support it. By all means create incentives for Spanish-speaking doctors as well. Add in all of the incentives you want if they are backed up by empirical data (or some other good reasons to think they will work). I think race has passed the test with very strong effect sizes, and the existence of other valid criteria does not diminish the fact that recruiting URMs in medicine has and will continue to benefit people living in areas with shortages of care.
Is it a proxy? Of course, you can’t measure what someone is going to do in the future. You don’t have access to the real variable you want to measure. All you can do is use predictors. And race is a fine one to use as the data bears out.
@Alexander Stanislaw
This part…
“I think race has passed the test with very strong effect sizes”
…is the part I disagree with.
I think that it is safe to say that being Latino is strongly correlated with speaking Spanish and being raised Catholic. Just as growing up Black in the Deep South is strongly correlated with being raised Methodist, and all sorts of other cultural touchstones such as food, music, etc.
So when we set out to prove that Latinos get better care from other Latinos and Blacks get better care from other Blacks, what have we really demonstrated? The importance of skin color, or of shared cultural values?
The proponents of AA would have us believe that skin color and genealogy trump socio-economic status and other cultural concerns. Ironically enough, this is the exact reasoning that was used to justify things like segregation and Jim Crow to start with.
Beware the illusion of transparency. Maybe that is the secret goal in the back of everyone’s heads that they refuse to talk about. But as evidence against that, I put forward the fact that employers making such a tradeoff is illegal.
“I’m pretty sure that it’s clear to everyone in discussions of AA that we’re sacrificing some amount of quality for diversity …”
I was struck by the way you put this. It makes sense if “diversity” is actually code for “blacks being better off” or something similar. But taken literally, it makes very little sense—diversity isn’t a highly important goal on its own, and if it were, we would be doing things like subsidizing African, Indian, and Korean students to come to American universities, not selectively admitting American blacks, who are much more like American whites than the foreign students would be.
When people talk about diversity as a goal, they mean (generally) that certain types of institutions should be representative of society in certain ways, not that we should minimize the Simpson index. So if we had a lot of Koreans in the United States, we should have Koreans in universities, but we don’t need to import Koreans for our universities. You may disagree with this, but it makes sense.
“but the lefties are almost all very moderate and a substantial fraction of the righties are really far to the right.”
The problem with such a statement is that it assumes a clear definition of “moderate,” which is one of the things people with different ideological views disagree about.
To take an example from a recent thread, a number of people seemed to agree with a statement along the lines of “neither communism or U.S. style capitalism works very well.” My guess is that at least some of them considered their views moderate–after all, they were being negative about both alternatives.
From my standpoint, they were revealing either a startling ignorance of the historical facts or a commitment to left wing ideology verging on lunacy. Communism’s “not working very well” consisted of several of the most murderous regimes in history and economic policies that kept well over a billion people poor for decades. Capitalism’s “not working very well” consisted of the greatest increase in material welfare, for poor as well as rich, that we have evidence of—while failing to produce the best outcomes that anyone could imagine.
One could, of course, reverse the point. From the standpoint of some of them, my views may well appear extreme, perhaps insanely, right wing (if that includes libertarianism).
One solution is to define “moderate” and “extreme” not in terms of congruence to truth but of position in some existing distribution of views. But most of us will have our perception of that distribution badly distorted by the particular bubbles we are living in.
I was thinking about what “moderate” means in terms of the diversity of opinions of the world (not proximity to truth), and realized that a “moderate” in the US is probably very right wing by current world standards (though probably extremely left wing on social issues by historical standards).
I thought of this when a progressive told me that, during the Bush (Jr.) years, he was so depressed about the political reality in this country that he very seriously considered moving to Europe (I also recall many people claiming they’d move to Canada if Bush was reelected, though I don’t know anyone who actually did).
It struck me then that, if one finds the US to be too right wing, then one can move to almost anywhere else in the world. If one finds the US to be too left-wing, then you can move to… I don’t know, maybe Singapore?
On the one hand, this seems to support the claims of leftists that the median US voter is very right wing, but it also seems to refute the notion that “neoliberalism” is taking over the world.
@onyomi: “Right-wing” and “left-wing” mean somewhat different things in different countries. For instance, while the UK is probably more “left-wing” than the US (based on various fuzzy descriptors), the US has no analogous party to UKIP. Iran is extremely “right-wing” compared to the US on most measures, but well to our left in economic terms.
Then there’s the whole issue of freedom of speech: the US is fanatically pro- by the standards of literally the entire rest of the world, and most of them see their opposition in left-wing terms, but freedom of speech has traditionally been a left-wing value in the US. (This has been changing in recent years, especially after the Citizens United decision.)
>If one finds the US to be too left-wing, then you can move to…
Dubai?
EDIT: More seriously, though, Chile.
@stillnotking
The tea party comes to mind. This shows a bit of the ideological portion. The second chart of Wide Partisan Divide in Overall Views of Immigrants’ Impact on the U.S. seems relevant
It’s not a perfect analogy because we don’t have a parliamentary system, so it is very difficult for single-issue parties to become established.
@stillnotking:
“the US has no analogous party to UKIP”
To signal boost PSJ, the US pretty much can’t have any sort of long lasting “fringe” parties that draw much in the way of noticeable support from the voting public. First past the post voting means that any serious candidates for this mostly put their efforts into advocating for changes within one of the two parties.
If I want a more activist, left bent to Congress, my best bet is to try and get someone who is inline with my views to win the primary of one of the two major parties (nowadays, that means Dems. At other times, it could have been either party).
Same thing for a more reactionary, or populist, or nativist, or any other approach. Pick the party closest to this currently and try to move them. Don’t form a third party.
In a parliamentary system, you form coalitions after the election. In a first past the post, you form them before the election.
Err… you’re aware that the UK Parliament also has first-past-the-post, right? In exactly the same way that the US House of Representatives does.
And indeed, UKIP got lots of votes, but only 1 MP. FPTP cannot be the reason that the UK has so many entrenched “waste-your-vote” parties drawing large numbers of votes election after election (Lib Dems, UKIP, Greens, etc) while the US has none. I think the reason is the much greater permeability of US political parties to outside influence, and the much greater difficulty of getting on the ballot in the US.
@Salem:
I did not, in fact, know that. I am humbled.
They do have 650 members in the House of Commons on 65M population, which I’m sure has an effect. This as opposed to 435 members on 320M population in the US House. And only 100 members in the Senate.
But, the UK head of government (head of the executive branch) is not first past the post, as it is elected by the parliament. I guess, technically, the electoral college could lead to the same type of horse trading that a parliament does, but in practice the US has a first past the post system determined on Election Day. I think that puts enormous pressure on pre-election coalition building.
And the representation of every small party other than the (very recent phenomenon, special case, see the creation of the Scottish Parliament in 1999) SNP in the House of Commons is essentially non-existent. Only in extraordinarily close elections would those other parties make the difference. Compare this to the few independents in the US Senate (Sanders and King). Control of the Senate has, in recent memory, come down to how the independents would caucus, so I think the situations are still analogous.
Diversity is somewhere in between an instrumental and absolute value. Some people value it in both ways, some neither, others one but not the other. Sort of like “freedom”.
As an absolute value, I like it fairly well. Repetitiveness is boring, I prefer some amount of change. Chaos is scary, but diversity is a much more constrained form of variation, so that’s great.
As an instrumental value, I think it is less useful, but I understand why some people disagree. It is an easy-to-apply heuristic – this is actually important and valuable, btw, not a tongue-in-cheek insult. It makes sense: insofar as differences are nonexistent, diversity is the expected result and the absence of diversity suggests bad/unnecessary filters. Insofar as differences are extant, diversity helps to balance those differences out. Wisdom of crowds is kinda neat. Analyzing the same data twice provides little additional justification. That sort of thing.
In an aesthetic instrumental sense, the same sort that one might use for evaluating ideas in math, I like diversity a lot. It just doesn’t work out that well in practice, a lot of the time. Communication costs go up. Lots of important things correlate in ways that are foolish to ignore completely. Tokenism is toxic.
The core problem, I think, is that diversity is only (or mostly) valuable when it arises naturally, and forcing diversity in outcomes on top of an already broken underlying mechanism is very different than fostering healthy diversity by repairing and strengthening the system. Observing diversity is like seeing a vibrant color in nature. It means that either something fruitful is growing happily, or something dangerous is going to try to kill you.
Wow, in retrospect that last bit sounds very racist. APOLOGIES. I dislike racism because it is evil and inaccurate and out of fashion.
Seriously, I should have left it at just a mention of the word danger. I think then the analogy would have lacked the scary undertones, fitting my intent better. “or something dangerously wrong may be present in one’s environment”, perhaps. I’m trying to show that in a lot of ways diversity is good, not trying to subtly undermine it. I do not endorse the last phrase of that metaphor, I explicitly deny that it is of truth or beauty.
My bad, feel free to shame me for reckless stupidity or for making the comments a safer place for Stormfront, I rather deserve it here. If Scott wants to edit the above comment and then delete this comment of mine, that would be pretty cool. I would avoid the shame, and the risk of accidentally sheltering racists would go down.
I don’t mean to be a dick, but your follow-up sounds almost scared. For me, it’s insulting that diversity could be more than an instrumental value: you mean that people should be worth more than their instrumental value, based solely on their ethnic/cultural background? Are you smoking crack? If I moved to America, should I be working with diversity hires, or should I be working with the best that can be hired for our price?
Anyway, although you likely don’t want my support, you have it. Diversity can either mean a fostering of an environment that actively discriminates against the status quo, or a fostering of an environment in which success matters beyond accidents of birth, and it’s up to you to choose between the two.
“I don’t mean to be a dick”
When you feel the need to write this disclaimer, re-write until you don’t. Because otherwise, at some level, you probably do “mean to be a dick”.
Honestly, I really am not sure why 27chaos felt the need to write the response. If I had to guess it was because, if you squint, it seems to assign a high probability to the statement “[minorities] are trying to kill you.” I don’t think that is what he meant, though.
@ HeelBearCub
Sorry, but your comment lacks a trigger warning: “SJW-style thinking ahead, with claim of telepathy and/or Typical Mind Fallacy and/or call for inefficient use of time and/or (either dishonesty or acceptance of non-consensual psycho-therapy).”
When you feel the need to write this disclaimer, re-write until you don’t. Because otherwise, at some level, you probably do “mean to be a dick”.
ETA – My disclaimer: I’m making a meta remark which has no relation who is on which side in the current sub-thread and merits/does not merit whose support.
@houseboatonstyx:
I don’t think it requires telepathy to read the framing “I don’t mean to be [x], but” and infer that the person is about to say something that they feel can be taken as [x].
Given that, it’s usually a fairly simple proposition to say something with the same meaning that does not imply [x]. If this was tried, but failed, the framing is usually “I’m trying to come up with a better way of saying this and failing.”
Is any of that unreasonable?
Is any of that unreasonable?
Yes.* Of course I’m not saying that “at some level, you probably do “mean to be unreasonable”. 😉
* both this and the comment I said lacked a trigger warning.
Now I’m off to more efficient use of time….
I don’t think I’ve ever seen a supporter of affirmative action admit that their policy will result in lower quality employees/students/etc.
One of the nice things about claiming that metrics are inherently biased and therefore your non-metrics-based policy preferences should be instituted is that even if your policy preferences result in lower performance, you’ve already established that metrics are inherently biased.
“we’re sacrificing some amount of quality for diversity,”
Exactly how valuable is a diversity of melanin concentration?
>Exactly how valuable is a diversity of melanin concentration?
That’s the $64,000 question isn’t it?
To elaborate, to me it seems like the proponents of AA are arguing that melanin concentration, gender, sexuality, etc. should be valued over whether someone is competent or shares similar values, which strikes me as the exact opposite of what Dr. King advocated when he said that people should be judged by the content of their character, rather than the color of their skin.
The alternative position that I hear from advocates of diversity is that diversity is necessarily a property of a group, and not of an individual – an individual cannot be diverse.
A diverse group will be less likely to succumb to group-think, and is likely to have a variety of relevant experiences, where a non-diverse group is more likely to have had the same experiences once each.
To pick a non-controversial example, if you’re a team designing a mobile phone, then having at least one left-handed member will make it much less likely that you put a button in a spot inconvenient for left-handers. Sure, that would come out in usability testing later on, but it would still require a (not free) redesign to fix it.
To pick a more controversial example, face-recognition algorithms are often initially trained on the faces of the programming team (and their families). This probably explains why they do a much better job with people of European descent than with people of African descent.
In addition, if the role of the group is to relate to the general public then having someone who shares a characteristic that the general public thinks is relevant can be useful because people with that characteristic are likely to relate better to someone with the shared characteristic. In this context, perception is reality.
@Richard Gadsden
That is a fair critique.
But at the same time, in my experience at least, it is not the sort of critique that is typically offered.
“A diverse group will be less likely to succumb to group-think, and is likely to have a variety of relevant experiences, where a non-diverse group is more likely to have had the same experiences once each.”
The amazing properties of melanin! It affects your very thoughts!
What bosh. If you want people with a variety of experiences, you should look for people with a variety of experiences and not use such a loaded proxy.
As for groupthink, it is notorious that those groups particularly concerned with diversity also insist on groupthink with a passion that would do any totalitarian country proud.
I notice the example you offer is of a test case, not an actual group of people. There’s no need for any of the subjects to have ever met each other.
I am an academic, so see AA/Diversity mostly in that context. The evidence that it’s fraudulent is that the individuals and institutions that support racial diversity mostly oppose the form of diversity actually relevant to their enterprise—intellectual diversity.
Consider the following gedankenexperiment. A university department has two candidates for a faculty position, about equally qualified. The appointment committee then discovers that one of them is an articulate supporter of South African apartheid. Does the probability that he will be appointed go up or down?
If the objective is intellectual diversity, it should go up—it’s a position that almost none of us get exposed to, so exposure to intelligent arguments for it improves our understanding of the world. Judged by my observation of the academic world, it goes down, probably to zero in many departments of many schools. In those schools where it doesn’t go down, replace apartheid with Stalinist central planning to get the same result.
As best I can tell, “diversity” in that context is simply a way of being in favor of affirmative action while evading the obvious argument against it.
The failure of universities to promote actual diversity of thought among their faculty is probably their greatest failing in terms of their own stated goals.
Totally with David on this. The Bush administration was about the ultimate expression of this type of thing, a Benetton commercial of different looking people with utterly identical beliefs and subservience to the ideal of being mindless yes men.
On the other hand, I think it’s a fairly worthy thing for, say, medical schools to preferentially recruit rural people because of how underserved rural populations are compared to city hospitals being packed to the brim with fresh grads.
The search for truth involves both exploring many paths and efficiently pruning paths that seem not to be working very well. That a biology department contains no creationists is not evidence that their claim to value diversity is “fraudulent.” The value of diversity is that it lets you explore a larger space, but resources are finite, so prioritization is important.
There’s a lot of equivocation over that term “diversity”. You’d think that a group with one Korean guy, one Polish guy, one Russian guy, one American white guy, a Chinese guy, and a Japanese guy would be considered diverse overall (though not in gender), but you’d be wrong.
A group consisting of 6 black women from Uniondale, NY, on the other hand, is “diverse”.
When the diversity advocates talk about the benefits of diversity, they want you to think about the former, diversity as opposed to homogeneity. When they actually suggest measures to take, they mean the latter, where members of particular groups are counted as diverse as a result of being "underrepresented minorities."
“A group consisting of 6 black women from Uniondale, NY” does not accurately describe what any university department or medical school that brags about diversity actually looks like.
It’s not clear to me. Given e.g. Implicit Association results and stuff like the studies on sending out resumes with identifiably ethnic names vs. white-sounding names, it seems plausible that candidates that are exactly as qualified as majority candidates are getting passed over, and that affirmative action simply allows those qualified people to get jobs in line with their skill and merit.
The resume study with names was a lot more mixed than you are representing. The name that polled the absolute worst was a white male name — Geoff. Emily was cited as an example of a white name in the very title, but did worse than four black female names.
Careful eyeballing of them seems to lead to the conclusion that weird names are not attractive.
OTOH, do not overestimate the effect. One sociologist found data that listed the zip code where children were born, with the zip code for their mother’s birthplace, and did the grueling work of determining social climbing thereby. The poor children with lower-class names climbed as much as those with upper-class ones.
> blacks and Hispanics who are on paper unqualified to have good jobs become overrepresented, while whites and Asians who are on paper qualified to have those jobs become underrepresented.
If your predictor is noisy and you have the same cutoff for both populations, this is not true (that is, even assuming lower performance in one group which is correctly captured by the noisy predictor). Hired whites and Asians will start out overrepresented (as a ratio to # of qualified whites and Asians) compared to blacks and Hispanics, and you have to lower the second on-paper cutoff (for blacks and Hispanics) quite a bit to get equity in “fraction of qualified people who are hired.”
I do not understand why noise in the predictor would necessarily lead to whites and Asians being overrepresented. Shouldn’t it just as often result in them being underrepresented?
It’s another generalized tails-come-apart thing, basically. You’re closer to the tail of the black/Hispanic distribution, so “predictor” and “performance” come apart more, i.e. a greater proportion of the “high-performing” people will not be those with the highest on-paper qualifications (predictor).
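The claim in this exchange can be checked with a quick Monte Carlo sketch. Everything here is an illustrative assumption, not an estimate of any real population: two groups whose true ability differs by one standard deviation, a predictor that is true ability plus Gaussian noise, one shared cutoff, and a fixed "qualified" bar. The point is only to show that, under those assumptions, the fraction of qualified people who get hired is lower in the lower-mean group:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
QUALIFIED_BAR = 1.0   # true-ability level that counts as "qualified"
CUTOFF = 1.5          # same on-paper cutoff applied to both groups
NOISE_SD = 0.8        # noise in the on-paper predictor

def hire_rate_among_qualified(group_mean):
    """Fraction of truly qualified people whose noisy score clears the cutoff."""
    ability = rng.normal(group_mean, 1.0, N)          # true performance
    predictor = ability + rng.normal(0, NOISE_SD, N)  # noisy on-paper score
    qualified = ability > QUALIFIED_BAR
    hired = predictor > CUTOFF
    return (qualified & hired).sum() / qualified.sum()

high = hire_rate_among_qualified(0.5)   # hypothetical higher-mean group
low = hire_rate_among_qualified(-0.5)   # hypothetical lower-mean group
print(f"hired fraction of qualified, higher-mean group: {high:.3f}")
print(f"hired fraction of qualified, lower-mean group:  {low:.3f}")
```

The mechanism is the one described above: qualified members of the lower-mean group sit closer to their distribution's tail, clustered just above the bar, so noise pushes proportionally more of them below the cutoff.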
Ok, I was not assuming that you necessarily believed that there was any difference in the mean aptitude between the groups. I think rejecting that there is such a difference is the modal position of AA supporters, which weakens the relevance of this argument, but it is still interesting.
You wrote:
“If your predictor is noisy and you have the same cutoff for both populations, this is not true (that is, even assuming lower performance in one group which is correctly captured by the noisy predictor).”
Did you mean, rather,
“If your predictor is noisy and you have the same cutoff for both populations, this is not true (that is, IF AND ONLY IF assuming lower performance in one group which is correctly captured by the noisy predictor).”?
” I think rejecting that there is such a difference is the modal position of AA supporters, which weakens the relevance of this argument, but it is still interesting.”
I think that AA supporters would generally say there isn’t a difference in potential. Broadly, I think SJ recognizes that poverty has an impact on development, so (implicitly at the very least) recognizes that if a group is stuck in poverty, the actual distributions will have different means.
To take this rather far afield, the NFL (and professional sporting leagues in general) had a representation problem in coaching positions. Given that there weren’t black position coaches or coordinators, the lack of head coaches wasn’t actually surprising (but it didn’t mean there wasn’t a problem).
Then they put in a rule that required that every coaching position hiring process must include at least an interview with a black candidate. This resulted in some silliness for a while, and Denny Green got some interviews for jobs he was never really considered for, but in the end it accomplished its goal, by stocking the coaching pipeline and putting black coaches in place to have a chance at becoming coordinators and then head coaches.
The potential for black people who knew football to be head coaches was there, but the mean actual NFL head coaching ability was sorely skewed before the AA measures were taken.
@Oliver
I don’t think I understand what you’re saying. If there is a mean on-paper difference, but it is not the actual mean aptitude difference (e.g. if the on-paper aptitude metric is biased, whether or not there is a mean difference at all), you can obviously still have inequity in the respective fractions of qualified candidates who are hired. This is what usually first occurs to people. Hence “even assuming” an unbiased predictor.
In mathematical logic, there are entities known as “proper classes” that cannot belong to sets. Maybe we can assume that human beings cannot belong to sets. In that case, the claim “Race A” is oppressing “Race B” becomes meaningless.
There is no logical precedent for introducing a kind of individual that cannot be a member of classes, and introducing one seems to run contrary to the whole point of the theory of sets and classes. Trying to fix social problems by messing with logic definitely seems to be attacking the problem at the wrong level (the motivation for the existence of proper classes is a logical problem, which is why it makes sense to solve it by adjusting logic).
Do you also object to counting people? On many popular interpretations of mathematics, that involves sets, so how will you make sure you have enough chairs for everyone if you can’t determine how many people you have?
I’d say it’s not the elements constituting sets that’s the problematic step, rather the next one: extending the definition of a property/relation from an element to a set. For a toy example, height is a property of a person and though nobody bats an eyelid when I say men are taller than women, the concept of height has been extended (in this case by aggregation) to relate sets of people – a very different object to a person.
Bringing it back to relevant to SJW-stuff, I think there’s a lot of trouble regarding back-and-forth between elements (usually people) and the sets we consider them as constitutive of. Even if you can sensibly define oppression of a set of people (say white men) against another set (someone else), in order to condemn me as a white man you need a way again to move from the guilt of the set, to the guilt of the constituent elements thereof – two totally different objects.
This reminds me of that image macro of the homeless guy, captioned “at least I have my white privilege”. A lot of the things the set of white people are enjoying didn’t make it as far as him.
As Scott Adams is fond of saying when he is told that “men rule the world:”
“Yes, but those are other men.”
(Also cue crude but illustrative story about how premiums for purchasing gasoline would change if men actually ruled the world as a group.)
If Blues get kicked in the ass once a week and Greens don’t get kicked in the ass, and then you pass a law that says Blues get kicked once a week and Greens get kicked once a month, you have not reduced the oppression going on.
But maybe you’ve built a bigger coalition that’s motivated to abolish ass-kicking entirely.
Maybe. Maybe you’ve instead built a coalition that’s motivated to punish Greens, or to promote ass-kicking for everyone as an honest signal of pain tolerance.
You neglect the stigma of AA, which accrues even to those who did not benefit from it. Indeed, I have heard, many times, that “white privilege” includes not suffering it.
(Logically since those who say that also deny there is any such thing as black privilege, it must mean that AA is not a benefit, since that would be a privilege.)
That stigma can get internalized. I once saw a black woman, professionally dressed, driving a Mercedes, with the license plate “QUALFYD”.
You also have an interesting phenomenon where those who achieve success outside of such means get tarred as being “uncle toms” or not really members of ____ minority.
Well, of course. How nasty of them to strip away the comforting illusion that failure is a result of their being meanies.
I remember the tale of a college student who was told that a certain racist professor never gave a black student over a C-, but he needed the course. So he went in and worked very hard, and pulled a B+. The social atmosphere about him grew frigid, because no black student wanted to admit, “He gave me a C- because I was lazy in a difficult course.”
@ Mary
I think this happens to an extent in certain physics courses in my university. Often engineers will suggest that the reason they did poorly in the physics classes was because the physics professors dislike engineers, as opposed to the engineers in question being lazy. I say this as an engineer. Seems to be a broad phenomenon.
@Mary:
“I remember the tale of a college student who was told that a certain racist professor never gave a black over a C-”
That kind of tale needs a citation or it’s just mental “stimulation”. Even to the extent it happened once, it would still just be anecdote.
The credulous repeating of tales of “those lazy blacks” on this, an arguably rationalist blog, without challenge from the right is one of the kinds of things that leads those on the left here to conclusions about which way the overall commentariat leans.
@ Mary
Vivid example, true or not. There’s the same dynamic in high school when scoring 100 on a difficult exam gets your classmates mad at you for “breaking the curve”.
ETA: I forgot to kick the shins of the whole “work hard”/”lazy” assumption that’s turning up here. Which regardless of demographic, imo is false and toxic. As thoroughly discussed in a previous entry I’m too lazy to look up.
HeelBearCub:
I agree that the story, absent support, should be viewed with suspicion–for two reasons. First it’s a good story, and good stories can easily survive while morphing from guesses or hypotheticals to facts. Second, it’s a story that some people would like to believe.
But if it is true, one case would be interesting—not so much for the “false accusation of racism refuted” point as for the “why evidence of such incidents is likely to get suppressed” point.
@HeelBearCub
What sort of challenge are you expecting exactly?
My last job involved hiring lots of math-oriented analysts, and I did look at grades, but I found that in banking I could find more qualified minorities and women than white men, because there was intense pressure to hire referrals from relationship managers, the HR director and the CEO… who were all white males. My group had the top analyst nine years out of ten and top production seven out of ten, and most promotions.
Computer science might have grade inefficiency, but in finance you can get better-qualified applicants with diversity. My top analysts were African male, Hmong female, Native American female, white male, white female, white female. All promoted. Jackie Robinson was awesome.
This matches my experience on the other side of the hiring process. While I am not going into finance myself, my school has a very strong tradition of legacy where an established set of richer, white students have easier access to the Big Four.
(By established group I mean literally have an affiliation label which grants easier access to management. To be fair, they are only 95% white.)
You are giving the SJWs too much credit:
Blacks overrepresented among criminal convicts => Racist police and judicial system.
Men overrepresented among criminal convicts => duh, I guess that men are just more violent and antisocial.
STEM jobs being 80% male => Misogynist brogrammer creepy nerds driving women away with their dongle jokes and sleazy shirts.
Nurses being >90% female => Women are just more caring and have better people skills.
…
SJWs don’t consistently care about equality of outcome, they only care about it when it advances their political goals.
Those are all strawmen, to varying degrees. The top results for a google of “feminism and male nurses” (e.g. this) are clearly saying nurses being >90% female => women are overrepresented in nursing due to gender roles and this is bad.
Good catch.
That’s cheating:
1) Your link is from a male nurse. It is likely that male nurses would think of this as a problem even if feminists in general do not, but the question is about feminists in general.
2) Your search terms would find feminists who think the overbalance of female nurses is a problem. That says nothing about how many feminists think of it as a problem, just that there are enough to Google. You can’t Google up failure to mention things. You did succeed in disproving the specific claim “feminists think female nurses are more nurturing”, but only that specific claim–not the more general one “feminists do not treat the female nurse and male programmer imbalances similarly”.
@Jiro:
Do you really think that Feminists don’t see female domination of a particular job as a marker that is likely to indicate various negative (from their perspective) things?
It does seem to be a trend (although my confidence in this is not high) that talk of gender balance in professions typically only talks about high-status professions. For example, there doesn’t seem to be much concern about the extremely low proportion of men in childcare or early child education or the fact that men are often actively discouraged by various social and legal mechanisms from participating in such jobs. I’m not trying to come off like an MRA here; I guess the question is what is the ultimate goal: equal career distribution between the sexes for beneficial reasons (gender-equal workforces are more productive/innovative/etc.), or equal aggregate career status between the sexes? If, for some reason, STEM jobs suddenly became very low status and childcare very high status (let’s ignore for the moment whether such status is in part because of the gender make up of such industries), would there be much agitation about gender representation in such jobs? After all, I don’t hear much about how construction or garbage collecting is male-dominated.
And if it is mostly about status (which is my guess), then maybe we should try to re-evaluate our status games rather than simply trying to siphon status from one group to another. Status seeking is pretty zero-sum, after all.
If someone would come up with some general rules for how to discover the Real True Revealed Preferences of millions-strong organizations, that would solve a million internet arguments.
So, any takers for the general solution?
Should we really eliminate competition for status? Isn’t promising people status if they do X a useful way of incentivizing X? The competition for X-related status may encourage people to better themselves, for example if X=fitness, or may have positive externalities, for example if X=career success or X=artistic skill.
I don’t think there is a general solution. You have to find the specific examples in each case that reveal the preferences under discussion.
In this case, that would require finding a high-status profession that is dominated by women, and see whether various sorts of feminists find that to be problematic and what they propose to do about it. I can’t think of anything offhand. But there may be a pretty close analog in college education, which is the default path to most high-status professions. Women make up 56% of undergraduate and 59% of graduate students nationally. Aside from the SJWs, whose views I hope are not representative, I don’t know what various sorts of feminists think and/or propose to do about this disparity.
How about fashion design? I went to an art college with, among other things, a School of Fashion that offers degrees in fashion design and fashion marketing. My graduation ceremony program lists about 200 graduates from that school and among those graduates, 12 have typically male names, 10 have names I don’t recognize, and all of the rest have typically female names. Obviously that’s an imperfect way of measuring this sort of thing, but it still suggests a gender gap that makes the one in STEM fields pale in comparison. Yet the only feminist concerns I’ve seen about gender gaps in the fashion industry are complaints that male designers are, on average, paid more than female designers (cue the familiar arguments about hours worked, taking time off for family reasons, etc.).
The article you linked defends the right of men to be nurses without being shamed; it doesn’t use the gender gap in nursing to claim that nursing is a misandrist industry and demand affirmative action to correct the balance.
It’s an article arguing for the “equality of opportunity” position rather than the “equality of outcome” position that the SJWs selectively apply to things like women in STEM.
There is no “More men in nursing” movement anywhere comparable to the “More women in STEM” movement. Nobody offers male-only internships, scholarships or other affirmative action programs and nobody lobbies for them.
Hence my point stands.
Hmm, you do make it sound like MRAs are ignoring their duties in this matter. I wonder what’s going on.
Why do MRAs have any duties in this matter? They’re not claiming anything negative (towards the men) about overbalances of men, so they don’t have to say anything negative about overbalances of women in order to be consistent.
@vV_Vv:
“The vast majority of nurses feel as though they are overworked, undercompensated and consequently unable to maintain a healthy work-life balance.”
Feminist critiques of nursing will naturally concentrate on this issue, which is broadly comparable to other female-dominated professions.
I don’t understand this triumphant crowing about feminists only making a fuss about high-status, high-pay jobs. They’re trying to correct the economical and status inequality between women and men. Deliberately dragging individual men down in income and/or status would be pretty shitty, so they’re trying to push individual women up.
There’s no practical difference. When people talk about high-status and high-paying jobs “needing more women,” they are saying that some of those high-status and high-paying jobs need to be taken away from men and given to women. Usually with the implication that if women don’t want these jobs, then they’re too stupid to know what’s best for them and need to “have their consciousness raised.” If these efforts are successful, they will result in the men that would have gotten those high-status and high-paying jobs taking lower-status and lower-paying jobs instead. It’s a zero sum game.
Not that it really matters, because if the goal is “to correct the economical and status inequality between women and men,” pushing men into lower paying jobs would work equally well as pushing women into higher paying jobs. It’s just that it is easier to put a positive spin on the latter.
I also find the implication that nursing is an inherently less worthy job than, for example, engineering unsettling. In fact, I often find myself wondering if favoring jobs coded “masculine” over jobs coded “feminine” is not itself a form of sexism.
Nursing is extremely important and underappreciated, but that doesn’t raise its status. Jobs that involve physical work, long shifts, dealing with other people’s bodily functions, deferring to someone else’s expertise and kindly interacting with lots of cranky strangers tend to be lower status in our society, whether you and I like it or not.
I suppose that most MRAs reject the “equality of outcome” position.
The phrase “effective altruism” seems loaded to me and does a lot of heavy lifting. Does it imply that someone who donates money to a young theatre troupe or artist collective is not being effective with their altruism because there are more worthy causes? What if that person cares about art? I can see how the phrase can be hijacked by organizations that are older and more connected but not necessarily more effective.
This might just be me but I am usually curious about the hidden implications of things.
I am a non-techie who lives in the Bay Area. I am about a decade older than the stereotypical techie (who is usually seen as being between 23-27). I am allegedly unable to figure out Snapchat (I’ve heard people over 25 don’t find it intuitive). I don’t think the tech scene is bad per se, but there is a lot of myopia in the scene because they dominate the Bay Area economy without necessarily lifting other economic sectors up (except maybe tasty restaurants and nice bars and coffeeshops). They also have a reinvent-the-wheel aspect to them and seem flush with investment capital for some pretty dodgy ideas. Wash-and-fold laundry services have existed for decades if not centuries; I don’t understand why one is worth millions of dollars in investment capital just because you design an app that can arrange the services through a smartphone. Uber and Lyft I get. Caviar is trying to expand take-out to include restaurants that were normally sit-down places only. That I get. Square I get. Lots of other stuff easily fits into the category of “trying to solve the social problems of affluent 20-somethings,” which is producing mixed effects. I honestly predict another bubble burst in 4-5 years if not sooner. Eventually investors are going to want to see returns.
So I am kind of skeptical when the tech community talks about how efficient and effective they are as compared to everyone else. They just have different priorities and worldviews and confuse this for the truth.
With regard to values, most EAs seem to be pretty explicit about the fact that they’re making decisions according to more-or-less formalized worldviews that other people might disagree with. I’m not totally sure what the rest of your post is trying to get at. The whole point of EA is that we are in fact focusing on demonstrable results over cool-sounding ideas, much unlike the smartphone app bubble you describe. This is actually where the word “effective” does the work you’re asking for. One of the key points GiveWell makes is that most charities don’t get evaluated AT ALL, so you have no idea what, if any, difference your money is making toward your goals.
The exception, of course, is MIRI where you are supposed to trust them and not actually try to evaluate their effectiveness/technical work.
The only posts I’ve seen trying to figure out if MIRI is actually effective are posts like this one from Karnofsky: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
I don’t think there are measurements for effectiveness of a technical/research organization (technical publications/researcher, outside researcher engagement, etc) that would evaluate MIRI as effective. It’s too unproductive.
Us having very long involved conversations about the thing you say we never talk about: https://slatestarcodex.com/2014/10/07/tumblr-on-miri/
I’ve seen that. I got the impression it was mostly one non-EA on tumblr using MIRI as an example of an ineffective charity, so you got into an argument. So when prompted you’ll defend MIRI, which makes sense; they are your tribe. But have you made an effort yourself to demonstrate effectiveness when MIRI isn’t under attack from tumblr?
But EA is supposed to be about getting outside the tribe. Where are the serious analyses I’m used to seeing in the EA movement? Have you (or anyone?) taken their technical papers to outside experts to get opinions? Made an effort to understand the technical papers yourselves? Or are you dumping money into a may-or-may-not be effective charity without serious analysis because they are your tribe? How is that any different than the philanthropy we criticize?
I wrote a recent post for the MIRI Blog that’s related to your concerns. Some relevant excerpts:
This speaks to the question of whether MIRI is an effective lever on AI risk; whether AI is an effective lever on long-run human welfare is a separate question, discussed e.g. on our FAQ page.
So, as a former math researcher:
Why aren’t any of the results on arxiv? You have many claimed technical results self-published but you aren’t putting them anywhere researchers can see them.
Most of these appear to be in a handful of the same relatively low-impact conferences (the sort of conferences where getting in isn’t so hard), why not aim for higher impact conferences? Why don’t you have many papers with your listed research advisers as co-authors? Why do so many of your affiliated researchers have no publications with you?
You’ve had at least 1 full time researcher and often more, going back many years now. Where are all the technical results from that time period? Is it really just this small handful of things?
We have three AAAI publications (although one of these falls outside our main research focus), two papers uploaded to arXiv, and another going up on arXiv shortly. I think your criticism is basically correct, though: MIRI didn’t publish as much (especially to arXiv) as it could have and should have. That’s changing pretty rapidly, and I think we’re a good giving opportunity for EAs who agree with us on a lot of factual and normative claims, but I think other EAs should wait and see what we’re able to produce over the next year (and wait for more third-party evaluations to come in).
I said on the blog post I linked, “Our current plan this year is to focus on producing a few high-quality publications in elite venues.” If we don’t continue getting results into substantial conferences like AAAI over the coming months (e.g., cleaned up and expanded material from our research forum), we’ll have failed on our own terms.
Going into a bit more detail: We’ve been around since 2000, but 2013 is when we pivoted to mathematics research (also coinciding with our name change — see Luke’s 2013 strategy post), and early 2014 is when we expanded to a three-person core research team in a stable way. So pre-2013, we didn’t have many organizational resources going into mathematics research/write-ups. Figuring out which formal tools were relevant to our informal concerns was also a slower (and less publishable) process than developing and applying already-identified tools.
A partial explanation, if not an excuse, is that posting more of our results to arXiv has been important and obvious low-hanging fruit for a while, but there’s lots of other important and obvious low-hanging fruit: attending conferences, hosting conferences, running workshops, visiting industry groups, and (especially) recruiting. I’m pretty happy about how much Nate and Benja have gotten done since they joined the team in March 2014, and we’re taking on additional researchers who should make it easier to delegate different tasks to different parts of the team, rather than requiring everyone to do a bit of everything; but I know Nate feels we can get more done per researcher-hour than we have in the past.
Re the research advisors, we’ve written eight papers with Bostrom and Yampolskiy. If you’re wondering about Russell and Selman (since they’re more prominent within AI), they’ve only come on as research advisors in the last two months.
Your website lists, beyond Yampolsky and Bostrom:
Tegmark
Hanson
Omohundro
Looks
Drescher
Yampolsky
You have 5 papers with Bostrom (one of the least technical advisors), 3 papers with Yampolsky but they don’t look technical at a first perusal, none with Hanson, none with Omohundro, none with Looks, none with Drescher. These aren’t new people, they’ve been on your website for years. I note Ben Goertzel was once tied to SIAI and you have no publications with him.
I found the second paper on arxiv “Toward Idealized Decision Theory” and it doesn’t appear to have a result in it. The first at least has a (fairly trivial) result in it.
And this “2013 pivot” seems disingenuous. You’ve had at least one full-time researcher since inception. I know you had a visiting program running back in at least 2010. Saying “we weren’t trying until 2013” seems like a way to reset the clock.
Could you explain what you mean? “MIRI’s Strategy for 2013,” which I linked to above, was posted in April 2013. It explicitly states: “We were once doing three things — research, rationality training, and the Singularity Summit. Now we’re doing one thing: research. Rationality training was spun out to a separate organization, CFAR, and the Summit was acquired by Singularity University.” It then describes four kinds of research MIRI has done in the past: “expository research,” “strategy research,” “FAI philosophy problems,” and “FAI math problems,” and says that beginning in 2013, MIRI will start pivoting to focus mainly on FAI math problems.
We did research before that point, but it was mostly nontechnical. I’m not constructing an ex-post-facto excuse for why we’ve been more productive in technical research in the last couple of years than in the past; I’m citing a strategy we decided on, announced, and then followed through on.
I think SIAI made plenty of mistakes (see: Holden’s post), and MIRI certainly can’t claim credit for SIAI-era accomplishments unless we also accept blame for what we did wrong. But the only two staffers we still have from the SIAI era are Eliezer (who’s been here from the start) and Malo (who came on in SIAI’s last four months). Alex came on in 2013, Nate, Benja, Katja, and I came on in 2014, and Patrick and Jessica came on in 2015. That’s a big part of why our recent activities are a better barometer for what we’re likely to do in the future.
(And my main goal here is to disentangle ‘how well-chosen were MIRI’s goals?’ from ‘how well did it achieve its goals?’, and to explain why we think we’re likely to be successful in our near-future plans. I’m not really trying to argue for a specific position about how high-quality or low-quality SIAI/MIRI’s decisions have been on average since 2000; the best people to do that are probably Eliezer (since he was there) and informed third parties (since they’re more likely to be neutral).)
The result in “Toward Idealized Decision Theory” is an implementation of proof-based updateless decision theory, plus blackmail games for evidential and causal decision theory. Evidential blackmail and retro-blackmail are both novel, and I hear that the academic decision theorists at MIRI/CSER’s Cambridge conference found them very interesting. At this point the result really qualifies as formal philosophy (i.e., decision theory as usually studied), rather than mathematics; we’re planning on writing up the associated mathematical results this year. (The relevant search term for the math results is “modal udt” on the Agent Foundations Research Forum — e.g., Using modal fixed points to formalize logical causality.)
The “Robust Cooperation on the Prisoner’s Dilemma” paper establishes unexploitable cooperation on the one-shot Prisoner’s Dilemma without communication. I’d be very interested to hear why you think this result is trivial! This is something three researchers (incl. Moshe Tennenholtz) independently tried for and failed at. We found the Löbian trick surprising, as well as the fact that it’s possible in polynomial time to figure out the behavior of bots that reason about each other using full PA proofs.
You’ve only picked up self-citations on your prisoner’s dilemma paper because it’s not interesting.
Tennenholtz has unexploitable cooperation by directly comparing source code. Tennenholtz’s cooperation works in realistic situations, I can code up a piece of python that can check if another piece of source code is identical and cooperate/not cooperate.
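The source-comparison bot described in the parent comment can be sketched in a few lines. This is a toy illustration of Tennenholtz-style program equilibrium (the bot names and string-based setup are my own, not anyone's published implementation):

```python
# Toy sketch of cooperation via literal source-code comparison.
# Each bot is a source string defining act(my_src, opp_src).

CLIQUEBOT_SRC = '''
def act(my_src, opp_src):
    # Cooperate only with exact textual copies of myself; defect otherwise.
    return "C" if opp_src == my_src else "D"
'''

COOPERATEBOT_SRC = '''
def act(my_src, opp_src):
    return "C"
'''

def run(src, my_src, opp_src):
    """Execute a bot's source and ask it for its move."""
    ns = {}
    exec(src, ns)
    return ns["act"](my_src, opp_src)

# Against an identical copy: mutual cooperation.
print(run(CLIQUEBOT_SRC, CLIQUEBOT_SRC, CLIQUEBOT_SRC))     # C
# Against a cooperator whose source merely differs textually: defect.
print(run(CLIQUEBOT_SRC, CLIQUEBOT_SRC, COOPERATEBOT_SRC))  # D
```

As the second case shows, syntactic comparison is brittle: any semantically equivalent but textually different cooperator gets defected against, which is the gap the proof-based approach is aimed at.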
Your cooperation only works in a sandbox where you have a simple formula describing the behavior of the source code. But figuring out what a program does given its source code is the actual challenging part. I don’t think anyone ever doubted that robust cooperation is possible if you have an oracle that can reduce any source code to a simple expression.
So sure, if you either have an oracle to reduce source code to modal formulas, or you code up your prisoner’s dilemma bots in a sandbox that forces a simple formulaic implementation, then you can cooperate. That is not interesting and not surprising.
There are bounded versions of the “Robust Cooperation” result, with no need for an oracle, that demonstrate robust cooperation for bounded reasoners that act only according to what they can prove about their opponent. That’s an interesting consequence of the “Robust Cooperation” paper that seems practically relevant to the hard problem of self-referential reasoning in programs.
Importantly, we aren’t focusing on programs that reason about programs in full generality; we’re using specific restricted (but still very powerful) reasoning systems like PA. What we found interesting is that agents restricted to reasoning based on statements that are provable from the source code can use short PA proofs to achieve robust cooperation. We didn’t expect to get robust cooperation that can be determined in polynomial time using PA.
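One way to see the flavor of the Löbian result computationally is to evaluate "modal agents" on a finite Kripke chain, where "provably φ" holds at a world iff φ holds at every world strictly below it. This is my own toy rendering of the idea, not the paper's formalism:

```python
# Toy evaluation of modal agents on a finite chain of worlds.
# FairBot's action is "provably, the opponent cooperates": at world k,
# that means the opponent cooperated at every lower world. By Löb's
# theorem the FairBot-vs-FairBot fixed point is mutual cooperation,
# yet FairBot still defects against an unconditional defector.

C, D = True, False  # cooperate / defect

def fairbot(opp_actions_below):
    # Cooperate iff the opponent cooperated at every lower world
    # (vacuously true at the bottom world).
    return all(opp_actions_below)

def defectbot(opp_actions_below):
    return D

def play(agent_a, agent_b, depth=10):
    """Evaluate both agents world by world up the chain; the actions
    at the top world are the stabilized outcomes."""
    hist_a, hist_b = [], []
    for _ in range(depth):
        a = agent_a(hist_b)
        b = agent_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a[-1], hist_b[-1]

print(play(fairbot, fairbot))    # (True, True): Löbian mutual cooperation
print(play(fairbot, defectbot))  # (False, False): no exploitation
```

Note that the evaluation stabilizes in time polynomial in the chain depth, which is the "polynomial time" point mentioned above, though the real result concerns agents reasoning with full PA proofs rather than this finite-chain stand-in.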
If you think the contents of “Robust Cooperation” would have been immediately obvious to you upon initially hearing the question, then I’m highly impressed and would invite you to contribute to this area! 🙂
I expect that we’ll talk about the result with Tennenholtz and other people working on program equilibrium before too long, and I expect that they’ll find these new results interesting. If you’d make a different prediction, empiricism will have to settle this one for us.
@Rob Bensinger I’ve shared your paper with several people I work with (two physics PhDs, another math PhD, some CS PhDs), and everyone seems to have said it’s an obvious result. The physicists were surprised it was publishable, but different fields have different standards.
If people DO find it interesting and worthwhile, why aren’t they citing it? The issue HAS ALREADY been answered empirically, and the citation count is firmly on my side.
And I think you would actually be happy to have me working with MIRI, but I’m probably too expensive for you guys :).
I don’t know if you’re involved in EA circles (or perhaps different circles from the ones I’m in), but I have lots of in-person conversations with other EAs about different charities, and some of these are about MIRI, and we frequently talk about whether it’s effective and how we can know this and what we can do to get a better picture.
I’m sure tons of people take MIRI’s effectiveness for granted, and tons more take its non-effectiveness for granted. But there are also a lot of people who think it could be effective but aren’t sure, and are trying to figure it out.
Probably different circles. Most EA-specific people I know learned about EA from Less Wrong and take it for granted that LW’s technical research is top notch.
So what is Effective Altruism for? If it started out with a bias towards “We need to fund research about AI risk/existential risks”, and is now moving away from that, what is it moving towards?
General “We can show how to maximise charitable giving and acting for the most efficient results”? But it seems like it has bigger plans than that; look at the website statement:
Well, great gosh a-mighty! That sounds big and special! What are these world’s greatest challenges?
I don’t know; at least a part of the audience was too busy sighing about the menu to worry about the rest of the world (except for worrying about the cluck-clucks and moo-cows that got etted at the meals provided).
I know banging on about this seems like flogging a dead horse, but really: you couldn’t agree to disagree for three days at a conference dedicated to saving the world and fell away because people were eating meat-based dishes? A movement that is not, so far as I understand, explicitly vegan even if a lot of its supporters support veganism?
I’d love to think Effective Altruism would be something that works. But at the moment, it looks like what Saul is saying: a bunch of relatively well-off techies in a particular area being enthused over a new, re-invent the wheel group (because this time, they’re going to do charity right) that probably is going to splinter into little factions over the likes of the above (veganism is the most important thing because animal rights! no, optimising QALYs with malaria nets! no, AI risk! no, other thing is most important thing!) and will come to a natural end in a couple of years.
Of the featured speakers at the Google bunfight, the only two I’ve ever heard of are Peter Singer and Elon Musk, and I only know Elon Musk because you lot on here keep mentioning him in tones of breathless star-struck adoration. Who the hell is “Olivia Fox Cabane, author of ‘The Charisma Myth'”? That guy founded Skype? Okay, heard of Skype, wouldn’t know him if I fell over him.
If I went on here raving about “Roy Foster is giving a talk!”, how many of you would have the first clue who I meant?
Right now, Effective Altruism Global seems like a navel-gazing group like the intellectuals Muggeridge talked about in the extracts Scott posted (you know, the ones who burned the title deeds to their little commune). If it expects to get anywhere, it needs to decide what it stands for (so no more of this hair-pulling over “We’re Officially Vegan!” “No, we’re not!” nonsense) and what it intends to do.
If that’s going to be “working on AI risk”, fine. Very niche interest, but go for it. If that’s going to be “working with already established charities to help them become more effective and efficient”, excellent. But if it’s going to be “teaching your grandmother to suck eggs because we’re technical innovators from the Bay Area, source and summit of all human excellence today!” – then it will disappear up its own backside.
Elon Musk is a co-founder of PayPal, which gave him enough money to co-found Tesla Motors (electric cars) and SpaceX. He goes on about some pretty big, visionary stuff, but pencils out the numbers enough so that he’s backing stuff which would make engineering sense within a few years. So since he’s pretty smart *and* rich, having him lends a lot of legitimacy to anything he gets involved in.
Without having read the EAG website, or anyone else’s, the impression I get is that the EA types are mostly going to figure out which existing charities are getting results, and encourage people to fund those. Helping other charities be more effective seems outside their remit right now.
Genuine question: wtf are you trying to convey here?
Implying that the main concerns of a nominally rationally driven group are mostly related to their feelings, presumably.
This seems disingenuous given that the EA animal rights group have philosophically rigorous, utilitarian defenses of the position.
Genuine question: wtf are you trying to convey here?
That their complaints are not couched in “philosophically rigorous, utilitarian defenses of the position”: some of the charges made about the non-vegan catering go beyond dispassionate statement and veer into emotional reactions and accusations of conspiracy, fraud, malice and insult. So I’m using overblown and infantile language to demonstrate how this sounds to outsiders who are neither vegans nor altruists, and to convey how ridiculous they are making their position sound. People who want to teach the rest of us, including established charities, how to do it all better (because they are smarter and more rational, and so can figure out how to do it better than the people already doing the work) couldn’t manage to eat together without making moral judgements of each other’s food choices and taking it on a personally insulted level, rather than applying that rationality and smartness to the problem.
That the rationalists who felt their money had been taken from them under false pretences to provide factory-farmed meals were acting out of sentiment, not appraisal of the evidence.
All the money charged for the meals went to the catering, presumably. So the meat-eaters and non-vegans also paid for their meals. Now, unless they have evidence that the meat meals were more expensive than the vegan meals, or that the money paid by the meat-eaters didn’t cover the cost of their meals, so it was taken from the money paid by the vegans, then they are not assessing the evidence, they are making the same “I don’t want my tax dollars going for nuclear weapons” arguments people make, which sound great.
Except tax money, and conference ticket money, all goes into one big lump that is then divvied up to pay for everything. If you’re going to penny-count for what you did and did not agree to pay for, some of the vegans could well find they didn’t pay full whack for attending the talk they were interested in, and some of the meat-eaters’ money covered that.
That was a mocking comment, and meant in mockery; having cooled down, I should not have used it. But I react very badly to emotional manipulation, and the invocation of tearful upset to make people feel bad for having violated what was not, after all, a contract (if it was a contract, then let them request the organisers to refund their money on the basis of breach of contract) makes me react with equal indignation.
Broadly, my problem was this: these are people who are self-selecting for being intelligent, rationalist, and evidence-based decision makers. They were upset by something that happened. Well, they’re human, they have emotions. But did they surmount that emotional reaction to be able to compromise for a limited time with what were, after all, like-minded actual and potential effective altruists? No, they did not take their problem to the organisers, they complained elsewhere about feeling hurt afterwards and dropped away rather than try and get things done.
That’s not very effective, in a movement which is based on “doing things better”.
Wait But Why posted a colossal, two-part article of everything you need to know about Elon Musk.
Elon Musk: The World’s Raddest Man
How Tesla Will Change The World
tl;dr When deciding on a college major, Musk wrote down a list of what he considered humanity’s 5 most important projects: internet; space colonies; green energy; AI; and human genetic modification. He accomplished the first item by founding PayPal (sort of). After leaving PayPal, he used his PayPal fortune to found SpaceX, which aims to accomplish the second item. While running SpaceX, he founded Tesla Motors (kinda), which aims to accomplish the third item. Concurrently managing SpaceX and Tesla Motors brought him to the edge of personal bankruptcy at one point. His most recent shenanigans include building a lithium battery factory for his Tesla cars.
He’s attempting to save the world single-handedly. And it’s difficult not to be impressed by how far he’s gotten.
“Save the world single handedly”. As someone working in the space industry you have NO IDEA how much that, and the general Elon hero-worship, grates. He has literally thousands of people working to turn “hey big cheap rockets are neat” into nuts and bolts reality, many of whom have forgotten more about rocket science than he’ll ever know. And SpaceX is doing great work, but has also had to learn through failure a few hard lessons that others learned decades ago because Elon’s ego was too damn big to admit that maybe the LockMarts of the world weren’t complete idiots, even if they charge too much for their rockets.
And for all the talk of how brilliant SpaceX is, they aren’t actually doing anything THAT innovative. Bold, absolutely (and that’s important!). But ultimately they’re flinging conical space capsules on kerosene LOX rockets, the hard problems of which were solved 50 years ago.
The cost savings of SpaceX come mainly from vertical integration (not a new idea, but unique among US space companies), paying for all the R&D out of Elon’s pockets and not worrying about recouping the costs, and running an engineering sweatshop (effectively mandatory 70+ hour workweeks for comparable pay to companies where 40–50 hours is the norm).
Elon is a visionary, engaging some interesting problems in tech with bold strategies. But accomplishing that vision takes a lot of very smart talented people, so when you say “Elon is launching a rocket today”, well, it’s a bit insulting.
This, from a position one Aerospace corporation down the street from SpaceX. Musk is doing good, important work in the field of private-sector space exploration, while bragging about it like a raving egomaniac. Other people are doing more and better work without the bragging. All of them together, have a fair shot at building cities on Mars in our lifetime.
And I feel a little bit guilty every time my thirty-second elevator pitch for private-sector space exploration starts sounding like a Musk hagiography, but he’s monopolized the public discussion so much that sometimes there is no alternative. Here, where we do have more than thirty seconds, we can and should do better.
Also, I should probably pick a nit with the 70-hour workweek thing. My employer picks up a lot of their burnout cases by offering a better work-life balance, but the long hours aren’t actually mandatory (at least after the first year or so); Elon just has the Silicon Valley mojo for applying social pressure in that direction and it takes unusual self-assuredness to resist.
And, Elon is paying young engineers to work ~70 hours a week to build spaceships that may go to Mars, for the same salary that they could earn for 40.0 hours serving as headcount in a cube farm making powerpoint decks for their boss’s boss’s next presentation. Most of them understand this is the trade they are making.
He “accomplished” the internet? What does this mean?
It means he knew that the internet was going to be a big deal, and took advantage of the Zeitgeist.
John, am I allowed to know which corporation(s) you refer to? Or is that too personal. I’d be interested in hearing your more than 30 second pitch.
In the space transportation field, the big one is Blue Origin. Founder Jeff Bezos is as secretive in his astronautical enterprises as Elon Musk is exhibitionistic[*], but it appears that their goal is to revive and complete the 1990s-vintage Delta Clipper reusable orbital transportation system. Along the way, they have developed the expertise and credibility to have been contracted by Lockheed and Boeing to develop the first-stage engine for their next space launch vehicle, a direct competitor to SpaceX’s Falcon. Suborbital test flights for about a decade now, orbital when they are ready.
On a somewhat smaller scale, XCOR Aerospace, formerly of Mojave, CA and now Midland, TX. Founder Jeff Greason was, I believe, a single-digit millionaire when they started working towards reusable orbital transportation almost twenty years ago, which has made for slower progress on a smaller scale. He’s also willing to talk about what he is doing, with somewhat less hubris than Musk. First suborbital flights expected late this year or early next, and they have started talking semi-publicly about the details of their follow-on orbital system. Also bits and pieces of outside work for other aerospace companies, including developing the upper-stage engine for the Lockheed/Boeing vehicle mentioned above.
And Greason had the credibility to be invited to sit on the Augustine commission, which convinced the White House that it was long past time to start shifting NASA’s routine transportation work to the private sector. This is directly responsible for about half of SpaceX’s early business.
The UK’s Reaction Engines Ltd has a potentially revolutionary engine they hope to turn into an orbital spaceplane; I think they are maybe technically sound but unbelievably optimistic on the business side, but if they do pull it off Musk had better hope Tesla brings him lasting fame. There are some other small players; I’m not going to name them all. Even the old big-aerospace dinosaurs of Boeing and Lockheed-Martin are showing signs of renewed vigor and innovation. Outside the private sector, DARPA is doing some interesting work with its ALASA and XS-1 programs.
Space transportation is not the end of the story. Logistics is also critical. There the big name is Bigelow Aerospace, which bought out some old NASA space-station technology not incorporated on ISS. Given the founder’s background I suspect his ultimate goal is or was a grandiose space hotel, but for now he’s offering more utilitarian space stations to commercial and government customers. Suffered from an early misstep in that he predicted SpaceX et al would have ships capable of reaching his space stations earlier than they did, and so very nearly ran out of money, but they’ve put up a few small testbeds, they have a contract to put an expansion module on ISS, and we will desperately need them when we are really ready to go beyond low Earth orbit.
Canada’s MDA (the space shuttle robot arm folks) and the US DARPA have been independently developing systems for refuelling and servicing spacecraft in orbit, again critical for transportation beyond LEO. MDA hopes to make a profit in the meantime by refueling or refurbishing communications satellites in orbit.
A bit farther out, the Shackleton Energy Company hopes to build a refueling station at Shackleton Crater at the Lunar south pole. This appears to be the nearest and most accessible source of rocket fuel beyond Earth, the utility of which should be obvious. They are sticking to low-level technology development for now, I suspect trying to avoid Bigelow’s issue of running out of money before their customers have spaceships that can reach them. I don’t know if they will last. They aren’t the only people with their eyes on that particularly valuable chunk of real estate, so whatever work Shackleton does now is likely to be bought out and used by somebody if they go under.
Paragon Space Development Corporation, founded by two of the more sensible veterans of Biosphere 2, is appropriately working on life support technology for long-duration space missions. And I’ll throw in a plug for Orbital Outfitters, developing space suits more comfortable and affordable than NASA’s twelve million dollar arrangements of BDSM gear.
That should give you a start; I wouldn’t care to predict where this ends. Quite possibly with 10^54 human inhabitants of the Virgo Supercluster Federation.
[*]In fairness to Musk, Bezos is rich enough that he can probably self-finance his complete program to profitable reusable orbital spaceflight whereas Musk needed (and got) investors and customers at a much earlier stage. This also pushes Musk to deploy an expendable launch vehicle with only modest levels of cost reduction at an early stage and hope to incorporate reusability later, which is I think a harder and riskier strategy in the long run.
@John Schilling
Musk is doing good, important work in the field of private-sector space exploration, while bragging about it like a raving egomaniac. Other people are doing more and better work without the bragging. All of them together, have a fair shot at building cities on Mars in our lifetime.
* applause *
“And, Elon is paying young engineers to work ~70 hours a week to build spaceships that may go to Mars, for the same salary that they could earn for 40.0 hours serving as headcount in a cube farm making powerpoint decks for their boss’s boss’s next presentation. Most of them understand this is the trade they are making.”
I work on a set of problems that, while not quite as grandiose as going to Mars, are still highly likely to be interesting and challenging for the typical aerospace engineer. My typical workweek is < 45 hours, with if anything pressure to work LESS.
At SpaceX, long hours are only "not mandatory" if you never care about being promoted, given a raise, or assigned meaningful work. And I'm sure there is plenty of mindless PowerPoint jockeying – their biggest customer is NASA after all.
Finally, for all the praise of Musk as a bold independent innovator, his companies are deeply dependent on government subsidy. Half the Falcon 9 development cost was paid by NASA. Tesla and SolarCity (and their customers) have gotten hundreds of millions from CA and the Feds despite up to now only producing status symbols for the rich (average Tesla ~$100k, affordable model always seeming to get pushed to the right…)
RE: your bullet list
– the Internet: PayPal is doing what now? It’s nowhere near as revolutionary as Bitcoin (whose workings I still do not understand in the slightest, even before this whole court case and the Mt. Gox meltdown). It’s a money-handling service for online transactions, and a very convenient one (I use it myself) but it’s not flawless (I’ve seen a lot of complaints about PayPal and how it treats small businesses). PayPal is a service that works on the Internet but it’s not doing anything to change the Internet.
– SpaceX. This is our best hope for interstellar colonisation? I really would hope not, seeing as how it’s still in the very, very early stages. I’m not even seeing anything approaching a credible private-enterprise moonshot with unmanned modules here, and the Moon is on our doorstep. A very long-term prospect is the best I can say about it, and the trouble with one visionary figurehead funding and being a one-man band himself is – what happens after he drops dead or gets forced off the board?
– Green energy. Holy moly, electric cars are going to revolutionise energy consumption? I’m more interested in domestic and industrial energy provision where that’s concerned. I’ll sign up for an electrical car once the whole self-driving car thing has been cracked, and realistically at this stage they’re only usable and dependable for short in-town trips, not commuting or long journeys. Again, not the big huge world-changer you seem to be implying, though we can certainly argue the toss over “Yes, but if we get cities like Bombay or Beijing using electric vehicles instead of internal combustion engines, think of the decrease in pollution!”
– AI. Take my eye-rolling here as read, all right?
– human genetic modification. I apologise, and this is not personal animus about Musk himself, but my general feelings when people start babbling on about this is “Fuck the fuck right off”. Yes, I’ve got biases and prejudices out my ears. But this is something I think we need to be very damn careful about, and that amateur dabbling is not the best notion, and we need better than a “better to ask forgiveness than permission” attitude when it comes to “And now we’re going to implant in surrogates the modified embryos and see what happens if and when the foetuses are brought to term, what could possibly go wrong here with treating human children as test subjects?”. Yes, I know: shocking revelation of pro-foetus attitudes, now if it were chickens we’ll all be right to be enraged because chickens are living, loving, experiencing creatures unlike foetuses which nobody knows what they’re like.
So Elon Musk is good at making money (at least with PayPal). He’s an entrepreneur, so he can advise on fundraising. If EA stuck with that, I’d be happy enough they know what they’re doing.
But this whole “He goes on about some pretty big, visionary stuff” is what worries me. I don’t know enough about Tesla to know if it is feasible or not, but electric cars are still – even though they’re very slowly becoming more widespread – getting off the ground. His Hyperloop proposal, as I mentioned, invoked Concorde in all but name, yet seemed to show no acknowledgement that Concorde in the long run (and it was in operation for 27 years) didn’t work and is now out of service and out of production.
So from my point of view, he’s someone who did well on the financial end of things by being involved in creating a tangible money-handling service for online transactions, and is now throwing his money around on pet projects that probably won’t pay off for years, if ever.
And he’s not there as the practical nuts-and-bolts adviser on “This is how you screw cash out of corporate donors”, but as some inspirational guru whose expertise on “He’ll know how to make this work!” is taken for granted.
And that’s part of why I think Effective Altruism Global is a very worthy effort but unless it gets a very quick sharp dose of reality is going to end up splintering at the grassroots (excuse the mixed metaphors) when the supporters are more concerned with going off in different directions with their major concerns (veganism versus AI risk versus global poverty versus who the hell knows what) and from the top it will die through lack of leadership when the people who are interested move on, die (sorry, but this happens) or go bankrupt.
Even if Tesla and SpaceX don’t ultimately succeed, he’s already accomplished more than I will in my lifetime. I admire his determination to set big goals and follow through on immanentizing them.
If I’m not mistaken, Paypal was the first service to figure out easy, convenient online payment. You could argue that someone else would have done it pretty quick if he hadn’t, which is probably true, but that doesn’t change the fact that it was a risky gamble and a serious innovation.
Re: Green energy – isn’t he doing something in solar? I’d guess that’s what the reference was about.
>> Does it imply that someone who donates money to a young theatre troupe or artist collective is not being effective with their altruism because there are more worthy causes?
My understanding, based mostly on an NPR interview, is that EA would not consider those donations effective. The interviewee (whose name I unfortunately can’t remember) was then asked about donating to NPR and had a fairly hand-waving answer about why that was different, but I assume he was just being polite.
My own beliefs would go further, and say that most cultural type donations aren’t really altruism or charity at all, but rather a sort of indirect consumption. If you and all your friends like going to the opera, it costs $1M a year to run the opera, they price tickets such that they total $.75M a year, and then they throw a big party where you and all your friends are socially pressured into kicking in the last $.25M, that’s not really charity — it’s just a strange kind of opera ticket.
Sometimes it’s more blatant than that. When I actually went to see a production of the Ring Cycle at my local opera, on top of the nominal ticket price I had to make a “required donation” to the opera nearly equal to that nominal ticket price, and the “required donation” part was tax-deductible. I bow my head in admiration for the ingenuity of the opera’s lawyers and tax accountants.
I literally just bought my tickets for said party. I’m excited.
Skimming the news, it looks like the Met has an operating budget of $327m (2013) but only brought in $93m in ticket revenue that year. Add in the $17m in revenue from TV (or something like that) and they are bringing in almost exactly one third of their budget. They apparently raised ticket prices by 10% for that season, and had a significant fall off in attendance which ended up chopping about $6m out of their till.
From this we may deduce that the people actually buying the tickets are very price sensitive. They are so sensitive that raising prices reduces revenue. However, there is a very small collection of ticket buyers who are not price sensitive: those are the people you just tack an extra zero or two [didn’t Scott just tell us to stop adding zeros?] onto the end of the price for a box seat. Nevertheless, it is clear that the Met cannot operate with just ticket revenue.
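Back-of-the-envelope, the quoted figures imply a price elasticity of demand greater than 1 in magnitude, which is exactly the condition under which a price rise cuts revenue. This assumes (my reading, not stated explicitly above) that the $93M is post-raise ticket revenue and the $6M drop was from roughly $99M before:

```python
# Rough implied price elasticity from the Met figures quoted above.
# Assumption: ~$99M ticket revenue before the 10% price rise, ~$93M after.
rev_before = 99e6
rev_after = 93e6
price_ratio = 1.10

# Revenue = price x quantity, so the quantity ratio is the revenue
# ratio divided by the price ratio.
qty_ratio = (rev_after / rev_before) / price_ratio

# Arc-style elasticity: percent change in quantity per percent change in price.
elasticity = (qty_ratio - 1) / (price_ratio - 1)

print(round(qty_ratio, 3))   # ~0.854: attendance fell roughly 15%
print(round(elasticity, 2))  # ~ -1.46: |e| > 1, so raising prices cuts revenue
```

A 15% attendance drop for a 10% price rise is consistent with the comment's claim that the marginal ticket buyers are very price sensitive.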
If making the operatic experience available to the masses is a good thing, donations are necessary. And the majority of the people making the donations are not the same people that are buying the tickets.
The EA people would say making the operatic experience available to the masses is just not that important.
To that I’d add that if you are motivated by making the operatic experience available to yourself, and making it available to the masses is just a side effect then that’s not particularly altruistic from a virtue ethics point of view.
In any event, I don’t see anything wrong with donating to the opera; I don’t subscribe to the basic EAish axiom that not donating to starving children in Africa is immoral. If it makes you happy to donate to the opera, or your alma mater, or your church — go for it. The upshot for me is the government ought not to kick in tax money in the form of deductions.
The government shouldn’t be allowing tax deductions for charity anyway. The government should be providing those goods that can’t be funded by private purchases and are important enough to justify taking money by force.
If you have two people in similar circumstances in terms of taxation fairness and one of them decides to donate money above their taxes, that doesn’t mean they deserve a subsidy on their consumption of things like defence or environmental protection from the other any more than they are due a subsidy from the other on their grocery bill.
Although I didn’t go that far in the post you respond to, I agree. I’d support eliminating the charitable deduction.
To my mind, this is the perfect selfish cause that the EA 10% line preserves the option to contribute to. I can pay my moral dues, and then focus on making the operatic experience available (to me and the masses). Yes, it might be more moral to continue donating to alleviate extreme poverty, but at a certain point we can stop kicking ourselves.
Here in Germany, we have quite a number of operas, and the cheap seats aren’t much more expensive than a movie at the cinema. Of course they’re heavily tax-subsidized: they’ll frequently make less than 5% of their budgets on tickets.
This leads to the funny situation where if you go the opera fairly frequently, i.e. are probably upper class, you’re actually consuming more state money than if you were on welfare.
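The claim above can be illustrated with a back-of-envelope calculation. Every number here is a hypothetical placeholder except the "tickets cover less than 5% of the budget" figure from the comment; whether a frequent operagoer really out-consumes a welfare recipient depends entirely on the assumed figures:

```python
# Hypothetical sketch of the per-visit subsidy at a German opera house.
# Only the <5% ticket-revenue share comes from the comment above; the
# budget and attendance figures are invented for illustration.
house_budget = 40e6        # assumed annual budget of a mid-size house
ticket_share = 0.05        # tickets cover <5% of the budget (per the comment)
annual_visits = 200_000    # assumed total attendance per year

subsidy_per_visit = house_budget * (1 - ticket_share) / annual_visits
print(f"State subsidy per visit: ~{subsidy_per_visit:.0f} EUR")

# A hypothetical frequent operagoer:
visits_per_year = 40
print(f"Annual subsidy consumed: ~{visits_per_year * subsidy_per_visit:.0f} EUR")
# Under these assumptions that is thousands of euros per year of state
# money, which is at least the same order of magnitude as basic welfare
# support -- the comparison the comment is gesturing at.
```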
As a student I can, with a bit of luck, turn up to these sorts of things two hours before the show begins and get tickets (Abendkasse) for just 10€! That’s actually less than you would pay at the big cinemas here. I actually avoided going to the Berlin Philharmonic for more than a year just because I thought I would have to eat noodles for a month just to get the ticket. Arts funding is one thing I think Germany does extremely well.
You have to choose your own goal but the point is that most people don’t act consistently with achieving the goal they state.
Of course a lot of the time people don’t like admitting their real goal. They might say they care about art but what they really care about is trying to impress that cute artsy painter girl they were with at the time. Or they just liked the face of the person they gave the money to and they like the immediate short term social payoff: less altruism and more buying good feelings and social standing.
If you prefer that then EA is definitely not for you.
You care about art? Only art close to you or any art anywhere? Can you think of any other groups that might spend the money better? Would putting a young artist through a couple of years of college in India help art more than funding a few days of a local troupe’s expenses?
Does your circle of concern extend only to your local town limits? In that case might giving some money towards a local project getting kids interested in art achieve more?
It makes a lot of people deeply deeply uncomfortable to really think about what they actually care about.
Lots of EAs go with something like minimizing human suffering, saving as many lives as possible, etc., but you’re not required to go with one of the set ones. (Though it does help if you can find a group with very similar goals to your own.)
Possibly the hardest question is: “What do you actually want?”
When I first heard about Effective Altruism (or what would become EA) this was exactly how the ideas were discussed and I found it incredibly valuable. The idea that when choosing charities you should evaluate how effectively they achieve their purposes was for me, at the time, revolutionary.
This praise also comes with the criticism that it seems like this is no longer how Effective Altruism presents itself. Rather, the movement seems to make several normative assumptions about how we all should think, then goes about trying to optimise charity in that direction. A good example here is the assumption that we should value charity the same independently of where it occurs.
All this is to say that your comment is excellent but unfortunately exemplifies an impression I no longer get from the movement as a whole.
Donating money to a theater troupe can be effective, and it can be altruism, but it isn’t effective altruism – it may be part of maximizing the fulfillment of your values, and it helps other people, but it doesn’t help others as much as possible. You trade off the effects of your donations on well-being against other things you care about.
“What if that person cares about art?”
Then the motive for his donation isn’t altruism (concern for the welfare of other people), it’s concern for art.
Artruism?
We need beauty to be human. Concern for art is concern for the welfare of other people.
Granted, a lot of it is status signalling (“see how cultured and wealthy I am”), dressing up for exclusive events as part of your social circle’s rituals, snob appeal; but nevertheless: the world is better for the music of Mozart, or the painting of Fra Angelico, or the poetry of Yeats, or dance, or other forms of human non-technical creativity.
…Speaking for myself as an artist, the kind of art that requires generalized donations (as distinct from patronage) is usually not the kind that flourishes humanity.
Disinterested patronage ain’t too hot, either. Around where I work, zoning requirements encourage the builders of business real estate to buy public art; they don’t stipulate anything about its style, content, or quality (though I have no idea how you’d stipulate the latter). So what you get is a lot of bland, ugly abstract sculpture made from sheet metal. Occasionally, if you’re lucky, you might get a bronze statue but it’ll still be bland and ugly.
I think the arrangement has negative value overall, but no one wants to look like they defunded the arts, so it persists.
Good point. I see the “generalized” and the “disinterested” as being much the same thing: if you are giving money to “Art” as this nebulous, indistinct mass, you are going to get shitty art. Good art has value, and people are willing to pay real money, and specifically *their own* money, for art that provides them with real value.
Good art’s law: when “giving money to art as an indistinct mass” becomes a target, it ceases to be good art?
Ba-doum Ksssh
Art is nice, but if you’re funding art you’re aiming your money at helping people who are already tapdancing on the summit of Maslow’s hierarchy of needs.
Oh, public art is dreadful, I quite agree.
Murphy, I don’t think I’m anywhere near the top of Maslow’s Hierarchy, but I am very grateful for our state-subsidised national radio service which includes a classical music station. That helps keep me the tiny bit sane I’m managing to hold onto.
People all around the world dance, sing, create and consume art.
Tracy W – and the vast majority of them don’t need an NEA grant to do it, either.
Maslow’s hierarchy of needs is bullshit. A poisonous meme of false information on the order of the Food Pyramid. (Why is it always triangles? I blame the Illuminati)
@suntzuanime – that’s a strong claim. could you elaborate?
@suntzuanime, It goes so much deeper than the Illuminati. The problem obviously started with Pythagoras.
What’s to elaborate? The claims it makes are false, and lead people to false conclusions like “if you’re funding art you’re aiming your money at helping people who [have great lives in every way]”.
@suntzuanime
*mostly* great lives.
You mean the claim that people who are struggling to breathe, are starving, or are freezing to death tend to care more about the next breath than the lack of spontaneity, creativity and art in their lives?
An interesting claim to oppose.
If you spend a few thousand dollars funding an art project you may indeed make some people’s lives slightly better, but the opportunity cost is still some dying kids in a much much worse situation.
How is my claim false?
Effective Altruism, at its core, is about taking consequentialism seriously when trying to good. An EA whose values place a very strong emphasis on the arts would make a real effort to donate to the organization that would promote the arts most effectively per dollar, not just donate to the nearest or most salient one.
In the same way that if your goal is to alleviate poverty, donating to GiveDirectly >>> volunteering at a soup kitchen, there is probably a much more effective way to promote the arts than giving to a local theater troupe.
I would guess that almost nobody who has actually thought hard about it thinks the arts are *really* more important than poverty, x-risk and animal suffering, so nobody has done the hard work of figuring out what the most effective way to promote the arts is.
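The EA-style exercise being described boils down to ranking options by estimated impact per dollar. A minimal sketch, where the options and all the impact numbers are invented placeholders (nobody has actually done this estimation for the arts, as the comment notes):

```python
# Minimal sketch of impact-per-dollar ranking. Every option and every
# number here is a made-up placeholder, not a real charity evaluation.
options = {
    "local theater troupe":     {"cost": 5_000, "impact_units": 50},
    "arts education in India":  {"cost": 5_000, "impact_units": 400},
    "touring exhibition grant": {"cost": 5_000, "impact_units": 120},
}

# Rank by estimated impact per dollar, highest first.
ranked = sorted(options.items(),
                key=lambda kv: kv[1]["impact_units"] / kv[1]["cost"],
                reverse=True)

for name, o in ranked:
    print(f"{name}: {o['impact_units'] / o['cost']:.3f} impact units per dollar")
```

The hard part, of course, is not the sorting but producing the impact estimates in the first place, which is the work the comment says nobody has done for the arts.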
I think you’ve explicitly said this before, but this sort of reasoning is also behind my instinctive belief that the principle of charity is never over-applied. I might vehemently disagree with a lot of Neoreactionary thought, but I can’t sympathize with calls to quash it in favor of yet more forums filled with traditional Democrat/Republican/Libertarian/Moderate debate on every single issue.
(I suppose the counterargument is something like Apophemi’s essay, and so by analogy the counterargument to AI risk is basilisks. Wait, no! Let’s go with opportunity costs instead.)
Off-topic, but what are your thoughts on this article, Scott? http://www.theatlantic.com/magazine/archive/2015/09/the-coddling-of-the-american-mind/399356/ SJ trends interpreted via cognitive behavioral therapy.
I find that I’m far more skeptical of CBT than I was previously, especially when it is used in such an all-encompassing, almost psychoanalytic, way.
So they kind of have what looks vaguely like a point, in that as per CBT the best way to deal with an anxiety-provoking stimulus is to just deal with it and realize it’s not as much of a catastrophe as you expected, and the more you try to avoid it the worse it gets. And I can see how they relate that to their thesis. And it’s not completely far-fetched.
On the other hand, CBT works inconsistently even in exactly the situation it’s designed for, and I am wary of drawing sweeping generalizations from there to society in general. To give an example – there are certain therapies where you treat PTSD patients by “retraumatizing” them, ie forcing them to relive the trauma in a certain very safe environment and then processing the memories more thoroughly than normally. But if a bad therapist retraumatizes the patient without producing the right context first, the patient just ends up even more traumatized. If someone were to generalize the idea of this therapy to “we should go around traumatizing Vietnam vets!” they would be super missing the point.
Also, you do not give therapy to people without their consent.
If I really wanted to do the sort of thing this article is doing – and honestly, it boggles me to no end how people (and magazines!) who are totally happy to swallow every other drop of the Kool-Aid choose trigger warnings, of all things, as their hill to die on – I would care a lot less about what kind of therapy helps people overcome being triggered by the word “meritocracy”, and a lot more about how we got into a situation where we’re debating whether hearing the word “meritocracy” causes the same kind of trauma as a Vietnam vet hearing a gunshot in the first place.
OK, I won’t dispute this applies to the Atlantic, but I’m sure you’re familiar with both of the authors (CEO of FIRE Greg Lukianoff and Jonathan Haidt of moral foundations theory fame), and neither is the Kool-Aid type.
I think we can see a pushback from the left, but it’s coming slowly, one timid step at a time. It’s annoying, but I don’t blame the people with careers to worry about.
Lukianoff and Haidt have not drunk the Kool-Aid, but they are clearly trying to formulate their argument in such a way that it might be convincing to some Kool-Aid drinkers. That’s a little cynical, but maybe justified — much as one might have tried to convince the residents of Jonestown that Jim’s Greater Glory was best served by sticking their fingers down their throats.
I don’t think treating a wide variety of ideas with respect and intellectual interest is a particularly widespread value, and therefore it’s not going to be an effective defense when accused of being a sympathizer to an idea that is viewed extremely negatively. You aren’t supposed to be treating it with respect and intellectual interest because everyone already knows it’s wrong and terrible.
It’s very possible to combine disrespect with intellectual interest. I’m almost obsessively interested in the thought of certain sectors on their own terms, but so far no one has accused me of adhering to them.
There was a recent thread comment here where someone expressed that her main complaint about EA was that it was all veganism, all the time. (Whoever wrote the post I’m thinking of: hope I didn’t misrepresent you, please correct me if so!)
@oligopsony:
Deiseach on the Chronicles of Wasted Time post.
It would be kind of bitterly funny to ask a hundred people in EA-adjacent communities about EA and collect all the “all X, all the time” responses.
Probably a fair assessment 🙂
It was more that, in a gathering ostensibly to discuss how to ameliorate the ills of the world by finding new and better ways to work with the donations people give and the choices they make, and which is not – so far as I am aware – a registered animal charity or one that has promoting veganism as a stated core objective, these well-wishing, ethical, high-minded people could find nothing more worthy of remark than blowing up a typical organisational cock-up into some kind of equivalent of deliberately stating “Fuck you, we love animal cruelty!” to the members.
All this over a menu that catered to meat-eaters and vegetarians, as well as outright vegans. Because they had been led to expect it would be an all-vegan menu.
The sense of entitlement and grievance gives me very bitter amusement, because if they can’t even manage to come to some kind of compromise “Okay, I don’t like this, but I’ll give in on this point for the bigger picture” with those of their own mind and ilk, how the hell do they expect to be the great world-changing force they aspire to be?
Because I’ve got news for you: there are a lot of people in the world that don’t think the same way about the same things as you do. And you can’t get rid of them by killing them all, so you’ll have to find some way to work with them and convince them that your way is better.
And ripping your movement apart over compulsory veganism is not the way to do it.
I didn’t take your posts as implying that you thought EA was all about veganism but rather that, since it wasn’t, the behavior you were describing was unreasonable.
The vegans– many of whom, I gather, work for or contribute to charities which fight animal cruelty– object to participating in a conference whose menu involves them in what they see as the slaughter and exploitation of animals.
The omnivores object to having to go a meal or two at the conference without having animal products provided to them.
It is deeply mysterious to me why the vegans are being intransigent schismatics here while the omnivores are totally blameless. This registers as a partisan attack, something like the Pakistani delegation to the Kashmir talks accusing their Indian counterparts of harboring an unreasonable sense of entitlement and grievance.
As someone said in the other thread, If someone else being an omnivore (or plain old vegetarian) is a trivial whim, then surely being a vegan is also a trivial whim.
In the end what we have are several people who’ve established through action that forcing their dietary whims on others is more important to them than participating in EA.
“In the end what we have are several people who’ve established through action that forcing their dietary whims on others is more important to them than participating in EA.”
By the same token, we also found several people who’ve “established through action” that the crushing burden of having to go a whole day or two without having exactly the food they want catered to them is more important than participating in EA. The vegans, at least, are acting on some kind of principle, even if you believe their principle is silly or misguided. But no one thinks we have a moral obligation to eat meat at every possible opportunity.
Deiseach linked a post where someone compared serving meat at the conference to cutting holes in bednets, which seems to be a drastically disproportionate response to what was probably just an organizational fuckup. Organizing things is hard! I agree that there is decent reason for an EA event to serve an all-vegan menu (provided all other dietary restrictions are also met; a vegan diet isn’t too hard when you’re buying food for yourself, but I imagine catering is more complicated), but the dustup over it makes EA look dogmatic and unwelcoming, which is not a good way for a fledgling movement to acquire new members. Especially given that the marginal value of a new EA is probably much greater than the harm caused by one non-vegan catering menu.
“By the same token, we also found several people who’ve “established through action” that the crushing burden of having to go a whole day or two without having exactly the food they want catered to them is more important than participating in EA.”
Not attending the conference != not participating in EA. I very much doubt that someone already involved in the movement will leave over this. But someone just beginning to consider it could very well be turned off.
I was under the impression that there was a substantial overlap between the vegan movement and the social justice movement; if so, the vegans should be familiar with the concept of “microaggression”.
Because dictating someone’s diet when they are paying for the meal constitutes, I think, at least a milliaggression, and will be perceived as such.
Telling people that your actions are guided by the highest moral principle and theirs by a frivolous whim that must obviously give way to your high principle, that doesn’t get any metric prefix.
And that’s fine, because maybe it’s true. Even if not true, it is consistent. An ethical vegan should logically feel justified in aggression, micro-, milli-, or otherwise, against a carnivore. If you believe you are dealing with a murderer, you aren’t obligated to be polite. Analogies to other current political debates are left as an exercise for the student.
But, Effective Altruism. How is it effective to invite carnivores to your event, trivialize their beliefs, engage in minor aggressions against them, and then ask them to join your cause? Everything EA has to offer them, they can find elsewhere. And if you aren’t obligated to be polite to a murderer, it might sometimes be wise to do so anyhow.
AJealousMonk– Yes, I saw. The chain of events, as unitofcaring has it:
–The conference originally had a vegan menu.
–Omnivores complained, some for probably legitimate reasons (i.e. allergies), others because they felt entitled to be served meat at the conference.
–The conference organizers caved and included factory-farmed meat without notifying anyone.
–After the conference, the vegans were upset about the way this was handled.
Deiseach appears to blame the vegans and only the vegans for this little dust-up. I’m pointing out that the omnivores are, at minimum, equally at fault.
“How is it effective to invite carnivores to your event, trivialize their beliefs, engage in minor aggressions against them, and then ask them to join your cause?”
By the same token, how is it effective to invite vegans to your event, promise them vegan catering, renege on that commitment because of the nakedly ideological demands of a few meat-eaters, and when vegans feel aggrieved by the shabby treatment, blame them for the whole affair?
I don’t understand why youse are having so much trouble perceiving the symmetry.
The vegans are the ones who made a big and ineffectual fuss about something that either wasn’t worth making a fuss about or should have been settled long before this conference was even conceived.
The carnivores made a request. For reasons that I think have been mischaracterized here, but in any event I have seen nothing to suggest that they did anything more than make a request and perhaps say or imply “if my request is not granted I will not attend”. Possibly if their request had not been granted they would have done more than just quietly stay home. Possibly they would have done something blameworthy.
But they didn’t.
“should have been settled long before this conference was even conceived.”
It was, though. The conference organizers settled on a vegan menu, then changed their minds at the last minute.
But I’m really more curious to know whether you genuinely endorse this schema as a general principle:
If a conference plans to accommodate Sect A, Sect B gripes about the accommodation and gets their way, and Sect A feels slighted as a result and voices their grievance, Sect A bears 100% of the blame for any damage this does to the cause.
I cannot imagine that anyone has such strong intuitions about intragroup politics from behind the veil of ignorance.
I have seen nothing to suggest that they did anything more than make a request and perhaps say or imply “if my request is not granted I will not attend”.
When a kosher buffet is advertised, bacon and sausage on the table is … clueless, assholery, or enemy action.
Would you give equal credence to a requirement of celibacy?
After all, the crushing burden of having to go a whole day or two without sex is more important than participating in EA. The celibates, at least, are acting on some kind of principle, even if you believe their principle is silly or misguided. But no one thinks we have a moral obligation to have sex at every possible opportunity.
I’m not vegan or even vegetarian, but I have no problem eating a vegan meal. I even try to prepare them from time to time as a culinary challenge. If I was a guest in the home of a vegan, I’d have zero issue eating whatever vegan cuisine they prepared and wouldn’t expect them to prepare meat for me.
That said, a conference for a group not explicitly vegan announcing an all-vegan menu, with no accommodation for other preferences/needs, well, that sounds like a moral judgement. A moral judgement about an issue that’s controversial (well, not even that controversial – true vegans are a small minority). Veganism may well be an appropriate topic at such a conference – which is part of why it seems inappropriate for the conference organizers to treat the question as answered.
As an omnivore, I would definitely not feel welcome, even though I wouldn’t bat an eye if they simply offered a vegan option among other choices. It’s the difference between saying “we will be taking 5 breaks daily and rooms will be made available for personal reflection and relaxation” and “we expect all attendees to attend daily Muslim prayer sessions”.
I get that veganism often comes from a strong personal conviction. And I respect that. But I think it has to be paired with a realization that the vast majority of humans don’t share that conviction, so maybe they should be a wee bit humble about it and not expect everyone to conform to their view. The burden of proof is on them, no fair begging the question. Either that, or, if their convictions are truly that strong, cut themselves off from a pretty big chunk of possible social interactions.
The comparisons to celibacy and compulsory prayer sessions fall flat, because the conference organizers aren’t forcing you to eat their repast or policing the food you consume in your free time. If you get bored you can wander off and purchase a bacon smoothie or whatever, it’s up to you.
The analogy to kosher or halal is more on point. A conference with a substantial jewish or muslim contingent, as a show of respect, leaves pork off the menu. And, okay, maybe the chicken sausage is pretty lame, but is anyone going to see this as an unwarranted intrusion into their dietary choices? I’m pretty sure I’ve been in that exact situation before, and it would have been ridiculous for me to see it as a personal affront.
Several people here seem to be viewing animal welfare as a personal purity or spiritual exercise. Hence comparisons to kashrut, prayer or celibacy. That is exactly not what EA animal welfare is about. It’s about the animals. It’s not about the people who eat them.
I think “If you don’t like abortion don’t have one, but you still need to pay for other people’s” is a pretty good comparison. At least abortion advocates rarely go there.
Disagreeing with moral claims is one thing, but brushing them off like this is not civilized engagement.
It was, though. The conference organizers settled on a vegan menu, then changed their minds at the last minute.
If changing your minds is still on the table, then it isn’t settled. That’s what the word means. If it had been settled, we wouldn’t be having this controversy.
And, if Sect B “gripes” privately while the decision is still up for debate (and that’s not what that word really means), whereas Sect A bitches and moans in public after it’s a done deal, then yes, Sect A is the one doing damage to the cause.
This is something the military handles particularly well, and you all might want to look at how they do it. Because while we can debate about whether the military is altruistic, they pretty clearly have something to say on the “effective” part.
John Schilling, it seems to me as though the vegans reasonably believed that the menu was settled, and they couldn’t have known that the conference-organizers would immediately cave to political pressure from the omnivores without some kind of telepathy. So it is in no way fair to blame the vegans for not making sure the menu was settled well in advance.
But let’s just run the scenario the other direction. Suppose that the menu was for months advertised as including meat, several vegans privately objected, and at the last second the conference organizers withdrew all of the entrees containing animal products. Afterwards, the omnivores raise a ruckus, while the vegans mock them on the internet for being spoiled brats. Would you really stick to your guns and find the omnivores wholly responsible, or would you and the other commenters here instead be exchanging dire warnings about the perils of vegan entryism?
You can’t say that carnivore preferences are less important than vegan preferences because vegan preferences come from a deeper conviction, and then complain when people apply a religious analogy.
And you can’t say “it’s for the animals, so suck it up for a day” when whether we should care about the animals is the very question at hand.
I don’t see why it’s so hard to understand that “I demand you accommodate my preferences by modifying your behavior” is more onerous and annoying than “I demand you provide me the means to engage in my preferred behavior in a way with no direct impact on you”.
There are few things in the world less onerous than not having meat provided to you at a couple of meals. As onuses go, it is a flyspeck. If this is a contest of annoyance or psychological distress caused, the omnivores will still have the weaker claim.
Suppose that the menu was for months advertised as including meat, several vegans privately objected, and at the last second the conference organizers withdrew all of the entrees containing animal products. Afterwards, the omnivores raise a ruckus, while the vegans mock them on the internet for being spoiled brats. Would you really stick to your guns and find the omnivores wholly responsible
Yes, I would, if the omnivores were making laments about “They changed the menu and I had to eat my beef stew in the same room as a woman eating dairy-free tarkha dhal at a table three tables over! I am so outraged and insulted! This was a deliberate slap in the face!”
Yes, I would point and laugh if the omnivores were making comparisons that providing vegan dishes as an option alongside the rest of the original dishes was the equivalent of “funding, enabling and officially sanctioning bednet cutting, patient-harassing, or other things in that vein.”
Yes, I would call them out on their lack of willingness to compromise, and how ridiculous they were being, if the one big cause they were most concerned about was “Some people who don’t share my tastes or values were accommodated in their dietary preferences”.
Yes, I would object if veganism was decried as immoral, sadistic, and selfish because some vegans wanted to be able to eat at the same time, in the same place, from the same menu, as other conference goers and had paid their money in the same fashion.
Yes, I would object if those vegans were being hassled with “If it’s sooooo important for them to impose their moral values on everyone else that they can’t go two days without beansprouts, let them buy their own food and cook it themselves! Or go off-campus to eat – there are loads of vegan restaurants in Mountain View!”
If I were to take the outrage at face value, I would be led to believe that the vegans had all the vegan food taken off the menu and were forced to eat non-vegan vegetarian food or go hungry, this on top of having paid for their meals and not being served what they expected. Instead, extra options were added to accommodate other people’s tastes.
I would suggest they avoid any Irish country weddings, where the choices on the menu are so reliably limited, they led to a racehorse being named Beef or Salmon.
You know, Deiseach, you’ve written now a novella’s worth of screeds attacking vegans, but at no point have you answered the question I posed to you several days ago: does your current mania for food choice extend to fetus steaks? How about monkey brains, dolphin flank, or labrador tail? You may have been lucky enough to be born into a society whose food taboos aligned with your own, or, to be less charitable, credulous enough to never question the dogmas about the sanctity of life inscribed in your catechism, but this does not excuse you from querying whether your avowed principles can be consistently applied to examples from outside the Overton window, or whether they are instead tools of convenience to be used to bludgeon your enemies but abandoned when it comes time for self-scrutiny.
I missed the information and am curious, was it changed so that all food options had meat or that only some of them did? I assumed it was the latter as the former seems crazy, but all the comments in this thread (with the exception of the latest by Deiseach) seem to be talking as if it was the former.
Surely some people can eat the vegan meals and others can eat the non-vegan meals and everyone is happy?
Edit: Never mind, it seems like my questions are answered in the next comment chain.
Earthly Knight, the vegans are assuming that Effective Altruism is set up as pro-veganism, or committed officially to veganism, or that there is a requirement that to be an Effective Altruist associated with this particular group, you must be a vegan.
The second assumption seems to be that the organisers had promised them an all-vegan menu (apparently they did) and when they switched with only two days notice, that this was some deliberate slap in the face or statement of intent.
Now, it seems that there is no Official Membership Requirement to be vegan, so the assumption that of course our movement is all-vegan is exactly that: an assumption. Animal rights or anti-cruelty or anti-factory farming may be part of the agenda, but it’s not (so far as I can make out) a mandatory position to hold.
Secondly, the leaping to grievance over what is fairly plausibly a screw-up or failure on the part of the organisers but not deliberate malice or anti-veganism statement is a bit rich, on the part of a group that is pluming itself on its superior ethical knowledge and conscientiousness.
Suppose the conference organisers had notified everyone at two days’ notice that they were going to provide translation services of all talks into Spanish. Suppose a sizeable proportion of the attendees then complained that this was blatant anti-Anglophone sentiment and evidence of disrespect for Anglophone values.
Wouldn’t our same EAs be lecturing people on the necessity for tolerance, for diversity and inclusion? And that by catering to the requests of a minority position, this was not some conspiracy to undermine the dominant position of the English language but flexibility in response to the preferences of others?
This is what needs to be sorted out right now, before this kind of nonsense (and it is nonsense in the context of a global movement to help better the world) can undermine the movement.
Is there a specific position on veganism? A bye-law? Is it part of the constitution or rules of association? Do the vegans want it to be such? Then discuss it and get it done. If there are such provisions, point them out to me, and I’ll happily eat humble pie and apologise.
If the menu had been scrapped so there were no vegan dishes on the menu, I’d be sympathetic to the complaint. If it was “We only had vegetarian dishes, and some of those had dairy or egg or fish in them”, yes they’d have a point.
But they’re complaining that they had to share a menu and a dining room with people who were non-vegans (that includes vegetarians who consume some animal products as well as carnivores).
I’ve got news for you, people: you’re currently having to share an entire planet with people who are non-vegans. I’m sorry you feel your safe space was violated, but there were no explicit promises or guarantees that the Effective Altruism Global movement is officially vegan and requires veganism on the part of all its members. You were promised an attempt to accommodate your preferences, and that fell through for whatever reason.
You are coming across as whiny cry-babies, to be frank (and if I get banned for saying this, I’ll understand). Apparently your veganism is more important than people dying from malaria, living in poverty, lacking access to clean water or education, suffering from polluting industries, or the risk we’ll all be destroyed by a god-AI.
You’re privileged. You can afford to choose to be vegans, because your choices are not between eating whatever the hell you can get your hands on or going hungry. And yes, the carnivore/omnivore EAs at the conference are every bit as privileged as you are. But the lot of you could not agree to share space for three days without ideological war on this point – how the fuck are you going to change the planet?
@Deiseach
Fighting animal cruelty is, at the very least, one of the biggest parts of the EA movement. If you remember, these are utilitarian vegans, so it doesn’t matter if they or someone else is eating the meat, all that matters is that factory farming is being funded by a group that is at least somewhat committed to ending it.
To the animal cruelty side of EA, this is EXACTLY like cutting holes in bednets, ruining worm medications, or spreading misinformation about AI risk.
It’s fine if you disagree about the utilitarian value of animals, but every member of EA should be able to apply the counterfactual of “what if this was the cause I believed to be most important?”
@PSJ
And I would disagree strongly with that position. At which point there is nothing much to say.
Deiseach, you’re avoiding the issue, which is that the omnivores who complained about having to go a meal or two without meat (for reasons other than allergies) are in every way guilty of the same pettiness and entitlement you’re accusing the vegans of.
PSJ: The problem with your argument is that, for almost any decision of the sort discussed, either alternative can be seen as anti-utilitarian.
If your objective is to maximize total utility of animals, then the vegan (or vegetarian) position depends on the assumption that the total utility of an animal raised to be eaten is negative. If you believe it is positive, then it is the vegetarian who is decreasing total utility of animals, since reducing the demand for meat results in fewer animals existing.
So if your position is that the conference can support no activity which some members of the organization believe reduces utility, there is little or nothing that the conference can do.
@Earthly Knight
No, the omnivores were demanding they get to eat what they want. They had no problem with vegan options being served to vegans. The vegans were demanding the right to decide what everyone got to eat. I think it’s generally recognized that demanding the right to restrict other people’s choices shows much more entitlement than demanding nobody restricts your own choices.
And as other people have pointed out, the message that a vegan menu sends is more important than having to eat a vegan menu. It’s sending a message that you consider a controversial issue settled, and don’t respect people who disagree.
In an earlier comment you made an analogy to the dispute India and Pakistan have over Kashmir. Let me extend that analogy: Having a vegan menu at an EA conference is analogous to having a conference between India and Pakistan about the fate of Kashmir, holding the conference in Kashmir, and then flying the Indian flag from the building the conference is in. By contrast, allowing people to choose between vegan and non-vegan dining is equivalent to flying no flag, or flying both flags equally high. (And obviously having no vegan options would be analogous to flying just the Pakistani flag).
Your attitude reminds me of those Christians who get horrifically offended at being wished “Happy Holidays,” and also can’t see why anyone would be offended by being wished “Merry Christmas.”
Let this entire thread serve as a reminder to all of you that when somebody claims to abjure ideology in favor of “what works,” it’s a trick. There’s no getting around foundational questions.
@ Ghatanathoah
Kashmir is a sensitive issue to both sides. Meat is a sensitive issue only to animal-welfare-motivated vegans and vegetarians.
Being told “You are a murderer” is a very sensitive issue to, well, pretty much everyone.
It is possible to express vegan ideals to a carnivore without making that accusation, but it requires some care. I mostly see vegans being the opposite of careful in that regard, and this is no exception.
No. The omnivores don’t want to force the vegans to eat meat. The omnivores aren’t trying to police the vegans’ diet in a shared space. The pettiness is not equivalent.
“The vegans were demanding the right to decide what everyone got to eat.”
No, they weren’t– omnivores were free to bring or purchase meat on their own if they were so inclined. There just weren’t any items on the menu containing animal products. Not having meat provided to you is not the same as someone imposing restrictions on your dietary choices.
“holding the conference in Kashmir, and then flying the Indian flag from the building the conference is in.”
Does the Indian government ever fly a Pakistani flag from buildings in the Indian-controlled areas of Kashmir? I’m not so sure. I also expect the Pakistanis are polite enough not to serve beef, and the Indians polite enough not to serve pork.
And some of us pre-commit to not being coerced by mere outrage in order to avoid being manipulated into arbitrary actions (and also to avoid incentivizing outrage in general).
Usually what happens is that there are plenty of no-beef and no-pork options, and anything with beef is clearly labelled as such, same for pork.
I don’t know where you go to eat, but it sounds a hell of a lot less fun than where I do.
InferentialDistance, I routinely attend conferences where no pork or shellfish is served because the kosher contingent will kvetch, no meat is served because the vegetarians will grouse, no alcohol is served because the feminists will go into hysterics, no gluten is served because people who probably don’t have Celiac disease will become bloated with rage, and no peanuts are served because– well, because it would look bad if any of the attendees keeled over from anaphylactic shock. Then everyone goes out to the bar afterwards and orders whatever they like. It’s really not a big deal, it just takes a modicum of tolerance.
“They had no problem with vegan options being served to vegans.”
I should mention that menus not specifically designed to cater to veg*ans seldom include actual veg*an-friendly options, and when they do, it is invariably a portobello-mushroom sandwich. I am so sick of portobello sandwiches. Nor does anyone ever remember to mark the desserts which contain gelatin. But these are just the routine inconveniences of voluntary diet restrictions, and– omnivores take note– this is literally the first time I’ve ever spoken about them.
No, they weren’t– omnivores were free to bring or purchase meat on their own if they were so inclined.
Were they, in the original conception, offered a discount or refund for that portion of the conference fee that covered the catering?
I saw in unitofcaring’s post a suggestion that poor carnivores might be offered a meal voucher. I doubt that was ever seriously contemplated, but no matter.
As a not-poor carnivore, that tells me exactly what role the leadership and consensus of EA sees for me to play. And it is that clumsy signal of that contemptuous truth, rather than the mere content of a meal, that leads me to conclude that my involvement with EA should be limited to ridiculing them for how unworthy they are of the “E”.
You all could at least have been a group of effective vegan altruists. Now, you can’t even manage that.
So, let me get this straight– you are forswearing participation in effective altruism, even though the leaders of the movement acceded to the demands made by your tribesmen, because some random commenter on the internet who you’ve mistakenly assumed is involved in the organization had the gall to suggest that the enemy tribe was not 100% to blame for the kerfuffle? Have you considered that it is the lack of self-awareness and all-or-nothing mentality you exhibit which is the real reason why all do-gooder movements are fated to descend into internal squabbling and schism?
There are many ways a compromise could be reached on this issue, but I suspect that all of them start with acknowledging that the vegans do not bear sole responsibility for any fallout which ensues if they do not surrender unconditionally to your obviously correct moral judgment.
> Fighting animal cruelty is, at the very least, one of the biggest parts of the EA movement.
Do the non-vegans in EA agree with you on this?
Did you ask them?
@HlynkaCG
What specifically in my argument do you disagree with? I don’t think it requires somebody to agree with the vegans in order to extend a principle of charity given that the group does explicitly oppose factory farming.
I think the argument would work for the other three major EA causes regardless of whether I agree with them.
@David Friedman
Given that EA writ large opposes factory farming, I think the burden of proof is on you to show that it is ineffective to boycott such foods. Furthermore, I seriously doubt that even a significant number of the omnivore members would argue that veganism is anti-moral, so this doesn’t seem relevant to the specific question about the EAG conference being addressed.
@ Earthly Knight
I routinely attend conferences where no pork or shellfish is served because the kosher contingent will kvetch, no meat is served because the vegetarians will grouse, no alcohol is served because the feminists will go into hysterics, no gluten is served because people who probably don’t have Celiac disease will become bloated with rage, and no peanuts are served because– well, because it would look bad if any of the attendees keeled over from anaphylactic shock. It’s really not a big deal, it just takes a modicum of tolerance.
Thanks for a report on common sense working in real life. I’d like to see their cookbooks, and reviews from their customers. Their lab-kitchen must be a lively, diverse place!
I don’t know how the hell I seem to have gotten myself into the position of “Give the vegans a good kicking”, but here goes anyway.
Earthly Knight, I’m not blaming the vegans for being vegan. I’m saying the whole thing was a storm in a teacup which was blown up into a silly spat with the potential to introduce a split in the movement, and splitting a fledgling movement which is, apparently, finally starting to get some notice outside the Usual Suspects is a bad idea for a movement that is supposed to be about being effective in your altruism. Nobody was forcing the vegans to eat non-vegan (or even vegetarian rather than vegan) meals. Effective Altruism Global is not a vegans-only organisation, so vegans expecting a 100% vegan menu at all times were expecting to have their preferences accorded special treatment.
John Schilling, your point is good but I would like to add that they (the conference organisers) did not invite carnivores as some kind of special-interest minority. They invited attendees who were supporters of, members of, and interested in Effective Altruism Global to a conference. Some (I have no idea how large or small a minority) of those people are omnivores/carnivores. Accommodating their preferences, when it was not done at the expense of refusing or denying the vegans anything other than a 100% vegan menu, does not seem to me to be excessive.
It was a cock-up. But we don’t know why or how it happened (anything from “caving in to meat-eater terrorist threats!!!!” to “oops, we screwed up estimating how much all-vegan catering would cost versus how much we charged attendees”), so leaping to the conclusion that this was a deliberate insult aimed at vegans does seem to me to be excessive.
If anyone can point me towards anything stating that Effective Altruism Global is officially vegan and requires all members to make a commitment to veganism, kindly do so. Otherwise, if anyone (including gross disgusting meat-eaters) can sign up, then it’s not your pet cause organisation. That is something that seems to me to need sorting out very fast, before something else of this nature kicks off.
I’m also deriving endless amusement from the sight of the kinds of people who I imagine would bend over backwards to be scrupulously careful about getting someone’s exact pronouns right and expect everyone to extend the same level of careful self-vetting of speech for a tiny minority of nonbinary persons who might or might not attend, but who seem to think that having meat and non-meat dishes on the same menu, and people in the same room as them eating meat, during mealtimes for the entire conference-going audience, is along the lines of declaring the Third World War with vegans as the Untermenschen grouping this time. How’s that for tolerating diversity of opinion and accommodating a minority group?
I asked you this the other day, but you never really answered the question. Suppose you are attending a professional conference in some god-forsaken corner of New Guinea where cannibalism and abortion and cannibalistic abortion are facts of life. Being savvy and careful, you solicit assurances beforehand from the organizers that no human remains will be served at the reception. But when you show up at the conference… whoops! There’s been a cock-up! The cannibals complained that you were shoving your dietary restrictions down their throats, and fetuses are back on the menu. What do you do? Do you sit there silently respecting their preferences, trying not to gag while the other conference-goers chow down on their grisly banquet? I very much doubt it.
The only difference between these two cases I can see is that you consider your dietary preferences to be sacrosanct and the vegan’s dietary preferences to be loony and wrong. And that’s okay, but you should make clear that your real objection to meatless menus is that the vegans are mistaken about the value of animal life and welfare and so we shouldn’t listen to them. No principle more abstract is being invoked.
This a slight variation on the distress of the privileged. The world around you conspires to ensure that you are always able to eat bacon if you want it but never have to be in the same room as a cannibal, and is so successful at doing so that you do not even realize that you have dietary restrictions, let alone that you would throw an apoplectic fit if you could not impose them on others at all times. Vegans do not have this luxury. Not being coerced into subsidizing the slaughter and mistreatment of animals is the most they can hope for. But you see even this small compromise as an unforgivable encroachment on your privileges.
Reading that survey, and your figure/ground illusion point, gave me a great sense of clarity about all this.
Emphasis mine.
I’m pretty sure I see where Dylan’s complaints are truly coming from.
Animal rights EAs get really, really bothered when you disagree with the importance of their cause. I’ve also disagreed with AI / x-risk EAs and global poverty EAs, who seem to have at least as much right to get emotional – yet they don’t. (This isn’t a strike against the moral character of animal rights EAs – I have my things that I get emotional about disagreements over too, and they’re a sight less morally compelling than animal suffering!)
But this is an experience I’ve had several times. It resonates with descriptions of your interactions with neoreactionaries compared to your interactions with social justice types – NRxers being remarkably open to reasonable discussion and cordial disagreement despite how important they think their cause is; social justice being remarkably prone to jerkishness, bad epistemology, and emotional appeals because of how important they think their cause is.
What I think is happening, is the animal rights effective altruists are getting mad that their more-emotional appeals aren’t getting equal table time alongside the more-logical appeals of AI, x-risk, and global poverty. And it is the animal rights EAs making the most emotional appeals – global poverty EAs could speak about the mass graves of skeletally-thin Indian and African children just like how animal rights EAs speak of the other EAs shovelling burned, mutilated animal carcasses into their mouths – but crucially, they aren’t. AI risk EAs throw around large numbers, but those large numbers are supposed to be convincing on a logical level, not an emotional one (contrast with the “millions of slaughtered chickens”, etc).
The Vox article is very ardently pro-‘focus on animal rights’. After listing several interventions that saved animal lives in a quantifiable manner, it literally says “This is exactly the sort of thing effective altruists should be looking at … he was also helping make the case that EA principles can work in areas outside of global poverty. He was growing the movement the way it ought to be grown…”
There’s an obvious split in Effective Altruism’s reaction to the article, with some people endorsing Dylan’s piece as a very important and accurate critique of EA, and others taking issue with its shoddy argumentation and politely expressing their disapproval. I have not done a rigorous survey but I suspect this split divides almost entirely along animal rights lines.
I think the animal rights EAs are hurting their cause by endorsing the Vox article. Their willingness to embrace bad critique because it agrees with their preferred direction for Effective Altruism feels like too much heart, not enough head. I think that’s going to sour relationships with a lot of EAs who aren’t concerned with animal rights, but are happy to eat vegetarian at summits. At the end of that slippery slope is EffectiveAltruism+.
Related – the statements in this post suggesting more politeness toward AI risk EAs seem like better advice for animal rights EAs than for AI risk EAs.
I just want to say that all of the animal rights EAs I know are very nice people, make excellent and very logical appeals for their positions, and that I haven’t had anywhere near the bad experiences with them that you have.
The one-sentence summary of my position on animal suffering is “animal suffering has moral weight to the extent that it makes humans sad that animals are suffering”.[1] I don’t evangelise this view, but I don’t misrepresent my views as being more animal-rights-friendly either.
If you hold a significantly more acceptable-to-animal-rights view, I don’t think you would’ve had the experiences I’ve had. (If your view isn’t significantly closer, then I notice I’m confused.)
1: I recognise that animals suffer but I don’t think animals are intrinsically morally important. All else equal, factory farming should not happen – I think it should be stopped by inventing something that is equal or better (and also more humane), thus obviating any need for factory farming. I.e., all else is not equal – factory farmers are not like the government of the antebellum American South, preventing the economically and morally superior treatment of slaves on a racist whim; factory farmers are more like the agriculture side of the agriculture vs hunter-gatherer split, where agriculture wins because it’s more efficient even though it’s morally worse.
Hm, I’m not sure I’d say that someone torturing a dog with electrodes to its testicles for fun, or skinning a cat alive to pass the time, is morally neutral as long as nobody finds out about it.
Personally I’m not quite sure how to state it totally clearly but I feel that unjustified animal suffering should be reduced anywhere where it doesn’t involve a quite significant increase in human suffering.
Though I think it’s more complex than that, I have a gut feeling that making a species extinct may be worse than some very non-trivial amount of human suffering. I’d feel no guilt about pushing a truck full of 20k mice off a cliff to save a human child, but not if they were the *last* 20k mice. (Aversion to information loss?)
Murphy, believe me, if there were 20k mice, no way they’d be the last 20k mice. Heck, save five breeding pairs out of your truck full and you’d build the global population back up within a few decades*:
*Note: mathematics not worked out because I’m crap at maths. But if under optimal conditions a female mouse can litter 10-12 pups per month all year, and you have 5 breeding pairs littering 50 pups per month, that’s 600 new mice in a year, which we’ll assume for the sake of mathematical neatness gives us 300 new breeding pairs which will all start littering themselves as soon as they’re three months or so old – so how long would it take you to get hip-deep in mice?
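For the curious, the back-of-the-envelope arithmetic above can be checked with a short simulation. This is only a toy sketch under the comment’s (wildly optimistic) assumptions – five founding pairs, 10 pups per litter, one litter per female per month, pups breeding from roughly three months old – plus two assumptions of my own: an even sex ratio and no deaths. The function name and parameters are illustrative, not from any source:

```python
def mouse_population(months, pairs=5, litter=10, maturity=3):
    """Total mice after `months` months, under the toy assumptions above."""
    # ages[i] = number of females aged i months; the founders start mature.
    ages = [0] * maturity + [pairs]
    males = pairs
    for _ in range(months):
        mature = sum(ages[maturity:])  # females old enough to breed
        pups = litter * mature         # pups born this month
        ages = [pups // 2] + ages      # half the pups are female; everyone ages a month
        males += pups - pups // 2
    return males + sum(ages)
```

Even with these cartoon numbers the off-by-a-month details barely matter: the growth is exponential, and five breeding pairs are back to a five-figure population within a year – so the last 20k mice would indeed not stay the last 20k for long.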
(apologies for the double post; edit link expired)
I’m not the only one that has had that experience. I am likely overstating the case and they are more moderately expressing these concerns.
Are they EAs who’ve gone through EA to animal welfare, or are they animal rights activists who see the EA movement as fertile ground to promote their pet issue, without a particular affinity for the other issues/ideals of EA? I suspect the latter to clash with the culture a bit more, obviously.
I think that dichotomy is fairly common in other movements as well, and speaks to the main issue that framed the post. Sometimes minority subsects do seem to have outsize influence. Sometimes the minority subsect contains a lot of single issue members that are more interested in making that issue the official position of the large group rather than promoting the large group’s interest in general.
Consider the Communist party showing up at a pro-increased minimum wage rally. It’s unlikely that the majority of the rally goers are Communists, but you can be sure the Communists do their damnedest to leverage the overall popularity of the rally to push their own agenda. And I think a non-Communist rally attendee can reasonably be irked by this.
I came to animal welfare through EA. Since EA is a utilitarian group and the moral weight of animals is an important utilitarian question, it would be very surprising to NOT see a lot of animals rights activists there.
People in the SSC comments seem to be assuming the animal cruelty side of EA is like PETA in making emotional arguments or trying to force other people through coercive tactics. This has not been my experience.
It also feels weird to presume entryism on the part of animal rights but not on the other major issues.
I’m not presuming entryism by animal activists alone. My whole point was that it is very common across many different movements. Heck this post was originally about whether AI riskers are engaging in it unfairly.
I don’t have a dog in this fight.
And I’m not sure that people are presuming that all EA animal activists are annoying PETA activists (didn’t my post allow that there’s a second type that seems to fit your preferred description?) so much as reacting in particular to the incident of some EA animal activists apparently throwing a snit and leaving or threatening to leave the movement because other people dared to eat meat in their presence. Which does sound like an annoying PETA activity.
Anecdata: I’m a poverty/x-risk/animal EA and I basically 100% endorse Scott’s responses, especially this post.
That seems like motivated reasoning on your part (although I am not sure I can defend that statement).
I would say the two key pieces that encapsulate the problem he is trying to identify are:
There is a certain hubris involved when an event can be summarized as saying “we will solve all of today’s problems, which are really just a rounding error anyway”
From the cynical Irish view of all such new movements:
“The first thing on the agenda of any new Irish party is The Split”.
I could see the animal rights people splitting off – unitofcaring’s post even said “I know people who felt like it was a way of saying “we don’t take your work at all seriously; we won’t even pretend for the length of lunchtime that there’s any merit to it”. Some of them lost faith in EA and drew away from the movement. But they mostly kept quiet about it.”
Now, to me, that’s taking it too seriously. But I’m not a vegan, vegans may indeed feel as strongly about meat-eating “for the length of lunchtime” as I feel about PZ Myers’ alleged desecration of the Host (I say “alleged” because I don’t know and really hope it wasn’t consecrated).
Myers thinks I’m an idiot too far beneath contempt to even engage for worrying about a piece of bread. I think the vegan EAs in that post are not idiots, but need to dial down the sensitivity a little because overwrought talk about not even respecting people for the length of a lunchtime is over-reacting to what was not a deliberate insult but a screw-up of the kind that happens when you’re trying to organise a big event with very little experience (I’ve been clerical support to people tasked with organising major conferences. There are so many ways snafus can happen that do not involve malice or deliberate intent).
Myers and I may be both equally objectionable poopy-heads who need a good kick in the pants 🙂
As an aside: if unitofcaring and others think “But we’re Effective and Ethical Altruists! This was held on the Google campus! We should be able to run everything like clockwork perfectly according to our every desire and preference because we’re smart and well teched-up!” – well, welcome to the real world, lads 🙂
So the group consisting of people that pride themselves on perceiving the world-as-it-is held a meeting they knew would have lots of vegans… and then were surprised when vegans act as vegans are well known and well documented to behave?
This… this doesn’t speak well of their capacity for observation and projection, to be completely honest. I mean, anyone who has met a vegan for five minutes knows this would happen.
“We should be able to run everything like clockwork perfectly according to our every desire and preference because we’re smart and well teched-up!”
What worries me is that the people claiming this aren’t either college students who just discovered something more complex than “See Spot Run” or uneducated peasants. But like- grown ass adults with jobs and taxes who have presumably had time to notice that literally every time this is tried, it fails miserably.
It’s part and parcel, I think, of two smart-people fallacies. One: that their evidence and arguments and positions are clear to any intelligent person – thus an intelligent person who disagrees with them leads to endless hair-splitting instead of points of common compromise.
The second is the assumption that because you’re smarter than most people, you’re smarter than all of them. Thus those morons who wrote Robert’s Rules and parliamentary procedures and standards for good conduct were just a bunch of dum-dums who didn’t know shit about sorting algorithms. Ignoring, of course, that those standards evolved for a damn good reason – namely, shit like this.
(T)hen were surprised when vegans act as vegans are well known and well documented to behave?
CJB, we should hire ourselves out as consultants to these kinds of things. I’ve got a vegan/animal rights campaigner brother, I know exactly how they’ll react. Also, I work in the public service/local government, so I know how crap happens and how things go tits-up when you’re organising a conference.
You sound like a person who also has experience of what happens when reality hits theory over the head, kicks it in the ribs when it’s lying on the ground, steals its wallet and mobile phone, and drives off in its car on the way to steal its girlfriend. Charge consultancy day-rates (per-hour billing naturally), we’ll make a mint! 🙂
I can imagine people expecting better of EA vegans.
You don’t expect people working at the Vatican Observatory (in modern times) to act the same way at a conference on the formation of stars as you’d expect a crowd of young-earth creationist lay preachers from the Deep South to act.
Both may be Christian, but the former you’d expect to be more rational.
I read that Vox article and got very angry about one of the animal rights charities that Matthews was praising. He speaks in glowing terms about efforts to get rid of sow stalls.
I used to work in a piggery. Sow stalls are a very literal Chesterton’s Fence. Animal rights people see them as cruel and campaign to get rid of them without ever taking a moment to ask themselves why the farmers put them up in the first place.
The answer is simple. They stop baby pigs from being crushed by their mothers.
Piglets are tiny. Sows are huge. It’s very easy for her to accidentally lay on top of them, and once that happens it’s lights out for piglet.
But the good news is that piglets grow very fast! If you just protect them by restricting the sow’s movement for a couple of weeks, they are old enough to be weaned. Then the sow can be reimpregnated and allowed to frolic freely in big open pens with lots of straw till she is ready to give birth again.
Removing sow stalls removes a short-term inconvenience for one pig at the cost of painful deaths for others. And it really pisses me off that the only quantification these people are willing to do of their efforts is to make sure they’re being really *efficient* in their counterproductivity.
Animal rights activists just don’t understand the animals they claim to love or the farms they say abuse them. Farmers have strong financial incentives to treat their animals well! A stressed dairy cow gives less milk. A sick pig grows slower.
The difference is that farmers have to face up to the existence of trade offs. Mulesing is gruesome and violent – but it prevents fly strike which is much much worse. Animal rights activists take no responsibility for the unintended consequences of their campaigns, so they get to feel all smug and virtuous while they needlessly and horribly kill countless animals through their efforts.
“If you just protect them by restricting the sow’s movement for a couple of weeks, they are old enough to be weaned.”
According to wikipedia:
Between 60 and 70 percent of sows are kept in crates during pregnancy in the United States.[9] Each pregnancy lasts for three months, three weeks, and three days. Sows will have an average of 2.5 litters every year for three or four years, most of which is spent in the crates.[10][4] They give birth to between five and eight litters before being slaughtered. As they grow larger, they no longer fit in the crates, and have to sleep on their chests, unable to turn around to lie on their sides as pigs usually do.[11] The crates are usually placed side by side in rows of 20 sows, 100 rows per shed. The floors are slatted to allow excrement and other waste to fall into a pit below.
“Farmers have strong financial incentives to treat their animals well!”
I am sure this is solely for the benefit of the chicken.
I’m at least skeptical that there exist animals that can’t avoid accidentally killing their own young without having their movement restricted by their benevolent caretakers.
To be fair, pigs have been artificially selected for so long and are raised under such unnatural conditions that the evolutionary mechanisms which once prevented them from crushing their newborns might no longer function properly.
But it’s extremely difficult for me to see how packing chickens into cages like Tetris blocks could possibly serve the welfare of the chickens. Like, couldn’t you just have one or two chickens per cage instead? How could that possibly be worse?
As someone who grew up on a farm, I have a lot of problems with factory farming.
But animal rights activists are pretty much universally fucking morons who make the jobs of responsible farmers harder.
“Benefits of battery cages include easier care for the birds, floor eggs which are expensive to collect are eliminated, eggs are cleaner, capture at the end of lay is expedited, generally less feed is required to produce eggs, broodiness is eliminated, more hens may be housed in a given house floor space, internal parasites are more easily treated, and labor requirements are generally much reduced.”
The trade-off is that some hens do have problems. Some free-range chickens get eaten by coyotes. The “less than a SHEET OF PAPER” argument doesn’t hold much water, however, for people who see how chickens actually behave: they’re flock creatures that tend to crowd very, very closely together.
Also, free-range chickens, or chickens with unrestricted movement, will often start “pecking parties”: for example, one chicken gets scratched, another starts pecking at the scratch, gets some blood on her, gets pecked in turn, until your entire flock gets wiped out.
Let’s see: hogs also have a nasty tendency toward eating their own young, besides crushing them. Milk cows NEED to be milked or else they suffer.
Veal – veal I’m opposed to. I’ve never eaten it, and I don’t want to.
But I’ve never seen a “LOOKIT THE POOR BABY ANIMUL” argument that didn’t have a sensible answer when you looked into it, like, at all.
“Benefits of battery cages include easier care for the birds, floor eggs which are expensive to collect are eliminated, eggs are cleaner, capture at the end of lay is expedited, generally less feed is required to produce eggs, broodiness is eliminated, more hens may be housed in a given house floor space, internal parasites are more easily treated, and labor requirements are generally much reduced.”
Note that all but two of these are effectively complaints that not using battery cages will cut into the farmer’s bottom line, which is an excellent argument for regulation or veganism, but a poor defense of battery cages.
The only relevant question in the vicinity: is it better for the welfare of the chicken that there be one or two per cage, or ten? You grew up on a farm, what do you think?
Of course, if we’re going to say that MIRI makes more sense than treating current poverty because billions of potential future lives are more important than any current suffering, then certainly tiling the landscape with factory-farmed chickens is superior to everyone going vegan and the domestic fowl going extinct?
> According to wikipedia:
And I’m sure Wikipedia knows way better than actual farmers.
Read the comment (at present) immediately below this one. Nathan was apparently unaware that gestation crates existed until just now. Also note the irony here: Nathan was defending the methods used at his piggery, which evidently got by just fine without the gestation crates that animal rights activists regard as cruel.
“And I’m sure Wikipedia knows way better than actual farmers.”
I think the general case should be that practitioners know less, on average, about the overarching issues than academics studying the issue, and that narrow academics know less than broad meta-analyses.
(Further, as a semantic trigger, I’ve noticed that people’s use of the word “actual” is a weak signal for “I’m not interested in thinking about this issue clearly.” However, I admit that this is a pretty weak signal and may not apply to this case.)
“I am sure this is solely for the benefit of the chicken.”
The fact that people adopt a program for selfish rather than saintly reasons should not be a strong argument against it. In fact, I’d argue that’s the whole point of good policies and mechanism design: to have incentives in place so that it’s *easy* to do the right thing.
If you’re optimizing for the welfare of chickens, you should, well, optimize for the welfare of the chickens, not cast aspersions on how practitioners in the field are not moral paragons.
Earthly Knight: That quote is wrong. Think about it. If the sow can’t lie on her side, how would her piglets feed? What farmer is going to make money from that?
I try not to comment on chicken welfare issues as I’ve never worked with chickens. But my strong presumption from the wool, pork and dairy industries is that farmers really do go to great lengths to maximise the welfare of their animals and even apparently horrific practices are a part of that.
The quotation describes gestation cages, not the farrowing cages which sows are moved to immediately before giving birth. Sorry I didn’t include the link:
https://en.wikipedia.org/wiki/Gestation_crate
“But my strong presumption from the wool, pork and dairy industries is that farmers really do go to great lengths to maximise the welfare of their animals and even apparently horrific practices are a part of that.”
My strong presumption from being familiar with capitalism is that all of these industries maximize the welfare of their animals exactly insofar as it is profitable for them to do so and treat them with inhuman cruelty wherever it saves a red cent.
“My strong presumption from being familiar with capitalism is that all of these industries maximize the welfare of their animals exactly insofar as it is profitable for them to do so and treat them with inhuman cruelty wherever it saves a red cent.”
My actual experience suggests quite the opposite.
What, precisely, about liking free markets makes people so inhuman again?
There’s a quote somewhere that homo economicus is: “a monomaniacal sociopath who can wander through an orgy thinking only about marginal rates of return”.
That seems to be the critter you have in mind here.
I’ve also been involved in farming, and I can tell you that we definitely pack in animals much closer than they’d want to be naturally. Some of the behaviors you’ve described in this thread (sows crushing or eating their offspring, chicken pecking parties) are stress behaviors. A lot of innovation in farming has been designs that allow the animals to stay somewhat safe GIVEN HOW DENSELY WE ARE PACKING THEM.
Visit a ranch raising a few animals for non-commercial, personal consumption and it’s a night and day difference in behavior.
Being embedded in markets, not liking or disliking them. I imagine the average libertarian is about as decent a person day-to-day as the average communist, fascist, left-liberal, &c.
Nor is such behavior inhuman! Wherever you go, there you are. (Although firms are probably the more relevant unit of analysis than people as such.)
“My actual experience suggests quite the opposite.”
I value your firsthand experience, but I’m also mindful that anyone raised on a farm has a strong psychological incentive not to view farmers as greedy, sadistic torturers, so I don’t think anyone should trust a word you say without corroboration. Sorry.
“What, precisely about liking free markets makes people so inhuman again?”
Markets function as sieves which ensure that businessmen willing to sacrifice scruples for profit rise to the top. It’s just their nature. If farmer Jones sincerely cares about his flock and limits his chickens two to a cage, while farmer Steve has no such qualms and packs them in until they scarcely have room to breathe, saving a few cents per chicken in the process, farmer Jones will be out-competed and ultimately go out of business.
(This is still, in truth, a hopelessly naive view of contemporary farming. There are virtually no autonomous Joneses and Steves out there anymore, just major agricultural corporations and their obedient debt-slaves, and you probably believe what you believe not on the basis of firsthand experience but because you’ve been exposed to an endless stream of propaganda manufactured by said corporations, who prevail upon corrupt politicians to unconstitutionally restrict opposing views.)
“greedy, sadistic torturers”
“obedient debt-slaves”
And then we had the commenter going on about chickens being “living, loving, experiencing beings” while dismissing “pro-foetus people” because a chicken is knowable, a foetus is not.
Anyone over the age of six who can use “loving” about what a hen feels or does not feel in the tiny amount of cortex it possesses, and is apparently unaware that they are relying on the halo effect of such virtuous words to persuade us that a hen has volition, intention, and sapience in any meaningful fashion, should not be allowed anything other than a plastic spoon to eat their corn mush with as the risk that they will inadvertently poke their own, or someone else’s, eye out is much too great.
If a chicken can meaningfully be said to love, in a way comparable to usage of the term when applied to humans, then a chicken can equally be said to be racist. I have evidence of two hens bullying, physically attacking, and harassing to death a hen of a different breed. If I am expected to do those hens the courtesy of granting them the capacity to love, I am equally demanding the reciprocation of admitting they have the capacity to hate, are responsible as “living, experiencing beings” with volition and sapience for their actions the same as a human would be expected to take responsibility, and are susceptible to being penalised or even executed (death penalty for murder!) as humans are.
Otherwise, shut up because you are not making a serious argument, you are indulging in Disney fantasy anthropomorphic animal wankage, and not treating wild or domesticated animals as animals with their own instincts and proper lives, instead of mini-humans in fur and feather suits.
Earthly Knight, I am not thrilled about agri-business. I think the way hedgerows have been bulldozed, the run-off of effluent and fertilisers into ground-water and rivers, the monoculture and spraying of pesticides and herbicides, the way any subsidies are immediately taken advantage of to, for example as round here, pack fields with sheep where never sheep were kept before – all this is bad.
But if you are going to call people mindless profit-driven sadists, then I am going to unequivocally nail my colours to the mast and be on the side of the meat-eaters. People like to eat meat. And you know what? Just liking it is reason enough! People are entitled to eat what they like.
You like eating vegan? I won’t insist you eat pork sausages if ever you’re at my house. I like lamb curry, I’ll eat it. Neither of us get to object to what is on the other’s plate as long as we’re not trying to force it down one another’s throats.
And something else: no, I don’t consider animal suffering to be on a par with human suffering. No, I don’t consider animals to be on a par with humans, I think using phrases like “non-human animals” is being too cutesy for words and it does not impress me or convince me. I think animals do not have the same rights or moral worth as humans do, and if animals rights compete with humans rights, the humans win.
Should animals be caused unnecessary suffering? No. Should even necessary suffering be reduced? Yes. But you know what? I know what an abattoir is like (we had a class tour to one when I was training to be a lab technician – they often combined Ag Science and Bio/Chem Science classes for these field trips). I know what slaughtering involves. I’ve seen carcasses being prepared. I grew up when butcher shops had sawdust on the floor to soak up the blood. I know exactly where and how the meat on my plate comes from.
I. Don’t. Care. I’ll. Keep. Eating. Meat.
If pro-choicers feel so passionately about abortion rights and supporting Planned Parenthood no matter what new revelations come out, so do I feel entitled to the same level of acceptance about my right to eat meat when and if I want to. Not even if I need to, I want to (the same way others declare the absolute right to abortion on demand without needing any particular reason other than the wish of the woman to have an abortion).
Now, if we want to try and have a civilised conversation about factory farming and agri-business, sure. But if you’re convinced I’m a greedy sadist, I don’t mind in the slightest. And your bad opinion of me will not change my mind. Call me as many names as you like, it’s only your breath you’re wasting.
Morrissey sang “Meat Is Murder” when I was a teenager/young adult. I loved The Smiths, agreed with everything he said, and still continued to eat meat.
Oh, the cruelty? Yes. And I don’t care, especially even less when I’m being hectored and insulted and shamed into admitting my gross, horrible sins.
Whereas you seem to have zero direct experience with farming of any sort. Does that not cast some doubt as to the state of your information?
This is false. Both large, factory farming enterprises, and small-scale, inefficient, family-owned farms exist in no small numbers – the latter in large part due to agriculture subsidies.
@ Deiseach
You don’t like abortions? I won’t insist you take abortifacients if you’re ever at my house. I like* killing fetuses, I’ll do it. Neither of us get to object to what is in the other’s uterus as long as we’re not trying to force it down one another’s throats.
* not really, but I can pretend for the sake of symmetry 🙂
Also, we’re not bacteria, archaea, plants, fungi, slime molds, oomycetes, algae or unicellular eukaryotes. I’m afraid “animals” is the only slot left.
And finally, I don’t think Earthly Knight meant to call all farmers greedy, sadistic torturers. I think they merely pointed out that anyone with warm memories of a particular farm would be strongly motivated to reason away from any negative conclusions about farming in general.
Humans are non-central examples of animals, and the phrase “non-human animals” is commonly used to imply that humans are central examples of animals and that other animals should be treated like humans.
It’s literally true that humans are animals, but it’s also literally true that Martin Luther King Jr. was a criminal.
Deiseach, the topic we’re discussing here is whether it is in any sense in the best interests of a chicken to be stuffed into a tiny cage with nine of its conspecifics. If your opinion is that this isn’t a problem because cruelty to animals is the bee’s knees, that’s a different issue, although I don’t think it reflects well on you.
“Both large, factory farming enterprises, and small-scale, inefficient, family-owned farms exist in no small numbers – the latter in large part due to agriculture subsidies.”
Small family farms produce only a quarter of the country’s agricultural output, and the majority of their “no small numbers” are tiny sidelines with incomes under $10,000. The larger family farms are the most dependent on subsidies, and are often effectively franchises of major corporations, deeply in debt and with all the autonomy of a turnspit.
The situation is more dismal still in meat production:
Four companies make 85 percent of America’s beef and 65 percent of its pork. Just three companies make almost half of all chicken […] Decades of lax antitrust enforcement allowed Tyson Foods to buy most of its competitors, giving executives at company headquarters the ability to control production on thousands of farms and dozens of major poultry plants across the nation.
@ Jiro
Humans are very similar to what most humans see as central examples of “animals” — that is, to other medium-sized mammals. Though, in terms of sheer numbers, I guess insects are more representative?
Anyway, what sets central examples of animals apart from central examples of other lifeforms is their relatively large (non-microscopic) size, ability to move around freely, and consuming other organisms as food — and hey, humans have all that in spades!
It’s just one of those politically incorrect biotruths, I guess.
Humans are not central examples of animals in this context. Nobody thinks killing a chicken is 90% as bad as killing a person, or even 50% as bad.
@ Jiro
Sorry, I’m not really familiar with this killability-based classification scheme. How does it go?
Something like this?
– life, human: 100% bad to kill
– life, animal, cute: 10% bad to kill
– life, animal, other: 0.1% bad to kill
– life, other: 0% bad to kill
– other: N/A, function “kill” not applicable
More seriously, as far as I know, there is no other way to name the biological group of non-human animals besides “non-human animals”.
And then we had the commenter going on about chickens being “living, loving, experiencing beings” while dismissing “pro-foetus people” because a chicken is knowable, a foetus is not.
I considered using a more easily knowable example (dog, chimp, etc.) but the thread had been using chickens, so I stuck to that for clarity.
Sorry for the “going on”. I was avoiding unknowable terms like ‘sentient’, ‘self-aware’, ‘conscious’, etc.
A better comparison, though less gentle, would be: “A foetus at early stage has about the same range of functioning as a permanently stationary parasite.” Perhaps I should have compared it to a plant, but I’d draw the same kind of line between a bean sprout and a bean vine.
This comment and my first one could be a lot more clear if I didn’t have to leave now.
Fink, I’ve seen a flock of hens with six acres to roam henpeck an outcast to death.
We don’t go around referring to a crime problem of “non-Martin-Luther-King-type criminals”, and when we talk about job discrimination we don’t talk about “non-ability-to-do-the-job-based discrimination”.
And when someone tells me that what astronomers do is study stars, I have yet to have someone tell me “non-sleeping astronomers study stars”.
It is assumed that we are referring to central examples of a category if we just name the category. Call them animals. Everyone knows what you mean; even those people who insist on using “non-human animals” know very well what you mean by “animals”–they just don’t like that they can’t sneak in the connotations of non-central examples.
@ Jiro
So, in a biological context, humans are certainly animals.
In other contexts, there is some variety and disagreement.
“Non-human animals” is a perfect solution that prevents ambiguity regardless of context.
@Nita
Do you really believe that there are not distinct connotations to “non-human animals” as opposed to “animals” and that this difference is not being used in an attempt to frame the discussion?
I would say yes to both, but maybe I am simply oversensitive to these sorts of things.
I also cannot come up with a relevant example where it is not clear whether “animals” includes people or not.
@ Tom
Both calling non-human animals “non-human animals” and using “animals” to mean “non-human animals” frame the discussion. You perceive the latter as neutral simply because you’re used to it.
The main thing I dislike about the traditional use of “animals” is that it contradicts biological fact. It just reminds me of the “rebuttals” to the theory of evolution that go: “they think we are descended from monkeys, LOL!”
Paraphyletic groupings are common even in biology. Reptiles, for example. Or dinosaurs, although I’ve been hearing more people talk in terms of “non-avian dinosaurs” lately.
BTW, how come people often use “apes” and “animals” to mean “non-human apes” and “non-human animals”, but AFAICT hardly ever use “primates” or “mammals” to mean “non-human primates” or “non-human mammals”?
At a guess? Probably because “mammal” and “primate” were coined to describe taxonomical groupings and “animal” was (probably) coined to describe a functional/behavioral one. When you’re using a word that’s all about taxonomy, there’s not much reason to exclude humans from it if they fit the criteria; it’s not like you’ll ever tell your kids not to eat like a mammal.
I’ve seen “ape” go both ways.
Nathan, I suspect very, very few animal rights campaigners have ever heard the expression “a sow eating her farrow”.
There’s room for improvement of course, but as you say – things like sow stalls and cattle crushes are invented for a reason, not merely because farmers want to cause suffering.
“There are virtually no autonomous Joneses and Steves out there anymore, just major agricultural corporations and their obedient debt-slaves,”
*dies with laughter*
Off hand? I know….probably two dozen separate families doing this. Personally. Just from visiting the family I have still doing it. They’re hardly the only ones.
But I didn’t realize you were a vegan – I just thought you were, like, a normal human worried about factory farming, which has its downsides.
Hey, vegan- ever seen what happens to a family of ducklings that gets hit by a combine? I have.
I mean- you do what you can to prevent it, but hey- it’s a big field, they’re good at hiding.
#ducklivesmatter.
But you keep on with your….moral cause- it’s pretty clear we’re not gonna have any useful dialogue.
Meanwhile, thanks for reminding me to register for hunting season.
So, some people are rude on the internet for **Justice**. They are sometimes called “social justice warriors”. But what shall we call those who go out of their way to mock the moral feelings of others for **Teh Lulz**? “Social injustice warriors”?
Unhelpful.
@Nornagest:
Can you clarify who you think is being unhelpful here?
Edit: Thanks
Your anecdata is not evidence, sorry. I gave the figures elsewhere: four corporations control 85% of the beef and 65% of the pork, while three corporations produce half of the chicken.
It is true that harvesting inevitably kills some field animals. It is also true that almost all livestock in the US is grain-fed, which means, because animals take in far more in plant nutrients than they produce in meat, that several times more field animals must be killed to produce a pound of beef than a pound of tofu.
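The multiplier argument in that comment can be made explicit with a back-of-the-envelope sketch. All of the numbers below (the field-deaths-per-kg-of-crop rate, the beef feed-conversion ratio, the soy input for tofu) are hypothetical placeholders chosen only to illustrate the structure of the argument, not measured figures:

```python
# Illustrative sketch of the field-deaths argument.
# Every constant here is an assumption, not data.

FIELD_DEATHS_PER_KG_CROP = 0.01   # hypothetical field-animal deaths per kg of crop harvested
BEEF_FEED_CONVERSION = 6.0        # hypothetical kg of feed crops needed per kg of grain-fed beef
TOFU_SOY_INPUT = 1.0              # hypothetical kg of soy needed per kg of tofu

def field_deaths_per_kg(crop_input_kg: float) -> float:
    """Field-animal deaths attributable to one kg of a final product,
    given how many kg of crops that product required."""
    return crop_input_kg * FIELD_DEATHS_PER_KG_CROP

beef_deaths = field_deaths_per_kg(BEEF_FEED_CONVERSION)
tofu_deaths = field_deaths_per_kg(TOFU_SOY_INPUT)

# With these assumptions, a kg of grain-fed beef accounts for several
# times the field deaths of a kg of tofu.
print(round(beef_deaths / tofu_deaths, 6))
```

Note that the per-kg death rate cancels out of the ratio entirely, so under this toy model the comparison turns only on how much crop input each product requires.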
I am not a vegan.
Otherwise, between this and the earlier comment about how animal rights activists are “pretty much universally fucking morons”, I think you’ve made it abundantly clear that you are not much interested in rationality or civility. Enjoy killing things for sport, you are better suited to it.
I was talking about CJB — Nita’s post came in while I was writing mine.
(I don’t think hers was very helpful, either. But I wouldn’t have called it out as such.)
I thought it was a joke following the setup by Nita:
– But what shall we call those who go out of their way to mock the moral feelings of others for **Teh Lulz**?
-Unhelpful
This fits my experience perfectly.
“There’s an obvious split in Effective Altruism’s reaction to the article, with some people endorsing Dylan’s piece as a very important and accurate critique of EA, and others taking issue with its shoddy argumentation and politely expressing their disapproval. I have not done a rigorous survey but I suspect this split divides almost entirely along animal rights lines.”
Politely, I don’t think that’s accurate. The people endorsing Dylan’s piece generally seem polite. One of the ones I noticed was Tom Ash, who I think is sympathetic to animal rights and other causes but has mainly worked on poverty (when not doing broad multi-cause EA work).
Even if the people making these complaints are wrong about the ratios, they’re still often getting at something real that deserves complaint in some cases, which boils down to the argument in favor of non-101 spaces.
It seems to me that ideas that are sufficiently contrarian or weird/extreme in a given (sub)culture often take up much more space in discussions than the mainstream views, and crowd out discussion of the mainstream views once they hit a critical mass in a community. That critical mass probably isn’t very high – probably only a small fraction of the community (like 1/10 or 1/8). These sorts of extreme and/or contrarian views often challenge the mainstream on basic assumptions, so the mainstream feels a need to respond to them, which takes up time and space that could have gone to 201 or 301 discussions of the mainstream views. Additionally, they often have more vocal and passionate adherents than people with mainstream views. Over time, this can slowly make people with the mainstream views feel less comfortable, until the nature of the community is fundamentally different from what it was intended to be. Also, such views just stand out more, which creates a gravity around them both for people in the discussion and people reading along.
My experience is that people with contrarian and/or extreme views for a (sub)culture often drive the direction of discussion far more than you would think given their numbers. When they post, things are immediately derailed to the more foundational 101 discussions rather than the 201 or 301 discussions, because those are the only issues they can engage with. They don’t even have the language or beliefs to discuss the 201 or 301 issues, because they have foundational disagreements. A few MRAs on a feminist forum can turn every other discussion into one about the foundational beliefs and assumptions of feminism (or vice versa). A few people who think the singularity is not near can make a thread about FAI into a thread about whether it is even rational to believe the singularity is near (or vice versa).
It seems to me that somebody sympathetic to the global poverty aspects of EA can make that argument about effective altruism. Yes, the mainstream of EA is still malaria nets and such, but if you look at the EAs who are getting the most attention, and the issues getting the most discussion, it is significantly disproportionately NOT global poverty but veganism, animal rights, x-risk, superintelligent AI, etc. This hurts EA from a PR standpoint; it will dilute the number of people in the movement who care mostly about global poverty rather than animal rights or AI (because people who care about animal rights and AI will be disproportionately drawn to a movement where those causes get disproportionate attention); in the long run it may even change the nature of the community entirely; and it will prevent or overshadow 201 or 301 discussions about global poverty, because people have to spend their time just arguing that global poverty still deserves to be the main priority, which is a foundation-level argument.
If his complaint is that debates over AI take up a disproportionate amount of space, it’s kind of unfortunate that he chose to address that by starting a debate over AI.
I have added a link to Toxoplasma of Rage in the original.
@LTP:
That’s a really interesting point. In my case on this blog itself, I have lots of foundational disagreements with the standard LW worldview. So if there’s a 101 subthread about foundational issues, I’ll often jump in. But when Scott posts something with a more 201 or 301 theme about how to do LW-style stuff better, taking LW views as axiomatic, I generally just try to keep my d*mn fool mouth shut and lurk. I wonder if anyone else here thinks about it that way.
Yes.
I don’t feel the need to talk about my disagreement with some core axiom of the Rationalist worldview when the discussion is about some tenth derivation of that.
Though I might point out when the discussion turns to how to apply that derivation to society that not all of society will go along (usually someone else will have put it better than I would have anyway, though).
“But when Scott posts something with a more 201 or 301 theme about how to do LW-style stuff better, taking LW views as axiomatic, I generally just try to keep my d*mn fool mouth shut and lurk. I wonder if anyone else here thinks about it that way.”
I always considered this to be sort of… basic human behavior. Like, if people are debating classical music vs. contemporary music, I can horn in with all my thoughts on Beethoven. If people are discussing Jay-Z vs. Nicki Minaj, jumping in and saying “I don’t suppose you people have heard of a gent named LUDWIG VAN BEETHOVEN, HMMMM?” doesn’t make me smarter. It makes me kind of a jerk.
I’m generally hesitant about “no-101” spaces. Honestly – and this is based on feminist websites, admittedly – the discourse ends up getting much, much more simplistic, and you just end up with people going “Well, that’s an example of fatshaming” and everyone nodding, as opposed to more open spaces where someone goes “how?” and you have to defend your premises, perhaps learning new things in the process.
I imagine that the point where this gets more complicated than basic human behavior is where the 301 discussion takes for granted a premise that the “outsider” considers abhorrent. For instance, an ethics-driven vegan happening upon a conversation about the best way to raise taxes to fund a new slaughterhouse.
What are some particularly good examples of the dynamic you are describing? LessWrong seems to have quite a lot of anti-singularity people, and yet the discussions about AI seem to get sufficiently detailed. Robin Hanson himself is anti-foom (though he believes in a different extreme vision of the future). Yet Robin Hanson and friends have not derailed the foom discussions.
I think the Men’s Rights forums manage to discuss men’s rights perfectly well despite the fact that they have no controls to keep out feminists. (I am very pro Warren Farrell; I am not pro the modern Men’s Rights movement.)
What is a good example of a community getting derailed by being too tolerant?
*I know the Dark Enlightenment people are obsessed with stopping entryism, but they seem to have a theoretical model where “any community not explicitly right-wing becomes left-wing.”
It seems to me that the modal picture of a community dying because it’s too tolerant doesn’t look like an ideological takeover, it looks like focused discussion getting crowded out by trolls and cat pictures and ideological spats. I’ve seen that lots of times. Sometimes it can lead to an ideological takeover when one side is significantly louder than the other, but that’s not phase 1.
We live in an age of great affluence here in the USA, and consequently we possess extraordinary communication tools and lots of leisure time with which to discuss and debate endless abstract topics. We do this (in part) because there are no real-time existential risks in our current society such as those faced daily by our ancient evolutionary ancestors. And yet, we are descended (and evolved) from people that overcame these risks. Can we continue this on this path of evolution in the absence of all risk? Is excellence in hand-wringing a survival trait?
I attended EA Global, and I can say that AI-risk discussions were happening quite frequently, with a lot of interest around them. Several people I spoke to thought MIRI was doing the most good per dollar.
Even looking at that EA survey, you are underselling the popularity by comparing just one number in a fairly small sample. MIRI is the 4th most popular charity on the survey (in terms of people supporting). And if you go in familiar with Peter Singer’s work and having been sold on a global poverty reduction movement, it’s very weird to see MIRI and existential risk given pride-of-place in talks.
I also note that MIRI proponents seem to not care at all about judging whether MIRI is actually effective. Few have made any effort to judge the quality of their output or to read their technical papers. The people I know with technical backgrounds are constantly excusing MIRI’s lack of technical work. Karnofsky judges MIRI as ineffective in that famous LW post, but people in an EFFECTIVE altruism movement seem to have no problem with that?
I’m not sure how that doesn’t prove my point. Soccer is the 4th most popular sport in the US. Would you say that soccer is dominating the sporting world, and that the frequent discussion of soccer is a problem because other sports are being marginalized?
I am sure some people thought MIRI was doing the most good per dollar. I am also sure some people thought AMF, ACE, etc were doing the most good per dollar. How come the former needs to have a federal case made out of it, as opposed to being One Part Of The Great Tapestry Of Cause Neutrality?
I think the issue here is that you didn’t attend the conference and are thus aiming at a vague target.
With one exception, the main talks were entirely given by “the movement”: they were things like The Future of EA, The Future of Philanthropy. AI-Risk was the ONLY specific cause given a main talk (not a parallel session). Roughly half of the main talks were AI-risk related.
MIRI is NOT treated as a charity alongside AMF or ACE. MIRI is treated as a specifically EA organization along with GiveWell, 80,000 Hours, etc.
MIRI gets a seat at the table and can advocate for its own cause at this conference, on the main stage. AMF does not.
@Professor Frink
“AI-Risk was the ONLY specific cause given a main talk (not a parallel session). Roughly half of the main talks were AI-risk related.”
Thank you for this. That was my biggest issue with the article. Dylan Matthews said AI risk “dominated” the conference, but never tried to quantify that statement at all.
Can you break that down in more detail? I think that might go a long way towards establishing that Dylan’s reporting about the conference is correct, whether or not one agrees that this accurately reflects the current direction of EA.
@ Prof. Frink
I complained about the AI panel (and preceding intro by Bostrom) being whole-conference events while other cause-specific things weren’t beforehand, and was told that the reason was that it was thought that Elon Musk was going to be a huge draw, so that this would be unfair to simultaneous talks. It still seems questionable, but it did put it in a somewhat different light for me.
Maybe a good solution would have been to have Elon compete with talks by EAG staff/volunteers. Then we don’t have people like Robin Hanson or Holden Karnofsky feeling like they were given an unfair speaker slot.
Could you say more about the distinction you’re drawing here? Usually the analogy I hear is ‘ACE is to animal welfare charities as GiveWell is to third-world poverty charities.’ ACE also started off as a division of 80,000 Hours.
At a glance, it looks like the featured speakers at the California EAG event were Jacqueline Fuller, Nick Bostrom, Elon Musk, Will MacAskill, and maybe Holden Karnofsky. Bostrom and Musk were there for a panel on AI with no competing events, so I agree AI got top billing, or something close to it. At the Melbourne EAG event, animal welfare and global poverty are getting top billing (because Peter Singer and David Pearce are there instead of Bostrom and Musk).
Regarding who got to use Olympic: GiveDirectly, AidGrade, Animal Charity Evaluators, and Mercy for Animals gave talks on the main stage. Reps of Evidence Action, GiveDirectly, Fistula Foundation, ACE, etc. are also speaking at the Melbourne and Oxford events.
MIRI got to give its own shorter (five-minute) talk in a smaller room, along with GiveWell, OpenPhil, The Life You Can Save, GiveDirectly, FHI, FLI, the Global Priorities Project, Giving What We Can, 80,000 Hours, ACE, REG, Leverage, EA Action, and Charity Science.
Bostrom was not only on the panel, he gave the lead in talk.
If we agree that AI risk got top billing here, then I think the reporting in that Vox article mainly carries through. He went to an EA conference in the Bay Area and found it really was dominated by AI-risk discussions. Scott’s referenced survey is irrelevant here, the author was reporting about the conference itself.
Personally, it really did feel to me like day two of the conference was primarily about AI-risk, and not EA. Maybe other conferences will be different, but obviously I haven’t been to those other conferences (I don’t think they’ve happened yet).
And it was clear from EAs at the conference that most of them consider MIRI and CFAR (and Leverage Research) to be “EA organizations” in a way that they don’t consider cause-specific charities like AMF. I think MIRI might be the only specific charity (as opposed to an EA-like charity recommender; this is a hard line to draw, but I’m thinking of things like GiveWell, ACE, OpenPhil, etc.) to give a talk at the event. i.e. most of EA is saying “here are good, effective causes to give money to” and MIRI was one of the few groups invited in to say “GIVE YOUR MONEY TO ME.” Leverage might be another one, but I’ve honestly never understood or even tried to understand what they are on about.
Yeah, I listed Bostrom, MacAskill, and Fuller as “featured speakers” because they got their own single-track talks (that weren’t about introducing EAG). I listed Musk and Karnofsky as featured speakers on more subjective grounds: Musk was the most heavily-advertised EAG speaker, and Karnofsky got an Olympic speaker slot to himself and a lot of Q&A time.
I’m still not clear on what you mean by “EA organizations”. MIRI is more socially connected to Bay Area EAs than AMF is, and there’s definitely more cultural overlap between ‘typical people who worry about superintelligent AI’ and ‘typical Bay Area EAs’ than there is between ‘typical people who worry about malaria’ and ‘typical Bay Area EAs.’ But it’s not obvious to me whether GiveDirectly is closer to AMF or closer to MIRI on this metric, and GiveDirectly spoke at EAG multiple times. (Evidence Action / Deworm the World is also giving multiple EAG talks.)
I also view MIRI as a relatively object-level charity, but I don’t think an organization has to be meta in order to be ‘real EA’, and I’m not sure if ‘object-level charity’ is the distinction you have in mind (since you originally classified ACE on the ‘object-level’ side, if I’m understanding you).
The distinction between object-level and meta-level charities is often pretty fuzzy, and organizations can change roles over time. GiveWell was originally just a charity recommender, but it’s now pivoting toward becoming more of a funder (in partnership with GoodVentures) and a center for new research, with an eye toward becoming an object-level EA intervention incubator in the future. CFAR, FLI, FHI, EA Policy Analytics, and the Humane League are object-level in some respects, meta-level in others.
MIRI’s current focus is on advancing an extremely specific set of technical problems, so I’d say it’s an object-level research nonprofit (though it was much less so when it prioritized outreach and rationality training). But MIRI also funds strategy research that’s intended to help us figure out what the best way is to mitigate long-term technological risk (under the AI Impacts project), and one of the long-run goals of MIRI’s narrowly targeted research is to give us a better big-picture view of what the highest-value computer science research areas are. If 15 years down the line MIRI becomes an incubator for high-value AI research organizations and GiveWell becomes an incubator for high-value global health and poverty organizations, it might become less clear which of the two is more meta v. object.
EA orgs’ roles can change, and both object-level and meta-level charities can come to EA events asking for more funds. (GiveWell certainly asks EAs for money to help fund its own operations, distinct from the money it gives to its top charities.)
Wasn’t MIRI originally the Singularity Institute?
I might be screwing up some history, but I think it has always been an AI-risk-specific technical research tank. I thought the focus on rationality training was just a side blogging project of one of its researchers, not the main focus of the institute?
I guess I’ll say to try to make my distinction clear, MIRI was the only group at the conference that I felt wanted money for itself. The other talks I attended seem to have wanted money to pass along to other effective organizations. Most of the EA organizations I’ve interacted with were about the meta, as you put it, the movement, finding effective charities.
MIRI has never presented itself that way, as far as I can tell. And I’ve never seen a detailed analysis of whether MIRI is successfully achieving its technical goals. The closest thing I can find is a very negative piece on LW by Karnofsky, and Scott’s tumblr debate. I don’t understand why EA doesn’t treat MIRI the same way it treats other “feel good/do nothing” type charities. Donate as consumption if you want.
Rob, this seems to be roughly the distinction you’re trying to draw and it already has a decently defined taxonomy with its own names, though recommender services aren’t a good fit with any of them, nor something like GuideStar that exists solely to provide information about other nonprofits.
Professor Frink: A lot of the Singularity Institute’s time was spent on running the Singularity Summit and doing other outreach-related work. We also spun off CFAR. The research team was smaller, and split its time between technical research and forecasting/strategy/etc. writing. The All Publications page gives a good picture of when MIRI pivoted from forecasting work to technical work (though we still facilitate some forecasting work under the AI Impacts project, for people who want to earmark their funds specifically for that line of research).
Many of the groups at the conference wanted money, though MIRI’s appeals may have been more memorable because we’re running a discrete July/August fundraiser. And, again, GiveDirectly gave multiple talks at the conference. GiveDirectly is one of GiveWell’s top recommended charities, and it is at least as object-level as MIRI. GiveDirectly has a large funding gap ($89 million according to GD, as of last month; a smaller amount according to GiveWell).
Most of our active research programs are very recent (we released our research agenda in late 2014) and haven’t yet been evaluated by the AI community; I’d say that the first step is for the academic community to assess our work (which we’ll hopefully be able to find ways to hasten) so that GiveWell/OpenPhil can use that as a resource for assessing whether our research agenda is on the right track. See my comments above and below.
The Open Philanthropy Project (a GiveWell offshoot) recently donated $1.2 million to an AI grants fund (in a pool with $6 million from Elon Musk), and MIRI received several of the relevant grants. However, OpenPhil was not in charge of who was awarded grants, and their view that AI and biosecurity are the top-priority global catastrophic risks shouldn’t be taken as an endorsement of MIRI’s particular AI approach.
Am I the only one here who, if I see Peter Singer is associated with something, it makes me want to run a mile in the opposite direction?
Like, if you want a sure-fire way to turn me away from Effective Altruism, saying Peter Singer is part of it, or well-thought of as a guru, or simply a guest-speaker is the way to do it 🙂
(It probably is just me, right?)
What’s your beef with Peter Singer?
He doesn’t like beef.
What’s your tofu with Peter Singer, then?
Probably the infanticide.
@ Randy M
What infanticide?
https://www.princeton.edu/~psinger/faq.html
A. Most parents, fortunately, love their children and would be horrified by the idea of killing it. And that’s a good thing, of course. We want to encourage parents to care for their children, and help them to do so. Moreover, although a normal newborn baby has no sense of the future, and therefore is not a person, that does not mean that it is all right to kill such a baby. It only means that the wrong done to the infant is not as great as the wrong that would be done to a person who was killed. But in our society there are many couples who would be very happy to love and care for that child. Hence even if the parents do not want their own child, it would be wrong to kill it.
“But in our society there are many couples who would be very happy to love and care for that child. Hence even if the parents do not want their own child, it would be wrong to kill it.”
Weaksauce. Obviously he would allow infanticide in any hypothetical society in which a child was not wanted. So, that infanticide.
I didn’t mean to imply that he committed infanticide (I don’t know), just that he justifies it, which is a legitimate thing to object to.
[edited out the pun that could be confusing and unhelpful]
No, it’s not just you. To use the example from the last thread, if anyone was going to suggest replacing the buffet’s hamburgers with viable fetuses, it’d be him.
Listening to utilitarians is one thing. Praising them is another.
>if anyone was going to suggest replacing the buffet’s hamburgers with viable fetuses, it’d be him.
There may be legitimate criticisms of Singer, but this is just nonsense.
As far as I can understand them, his arguments for infanticide and against factory farming add up pretty nicely to it being morally acceptable to eat dead babies who were killed “ethically”.
Can you explain what I’m missing?
Really? I mean, other than for PR reasons? Isn’t Singer the one who thinks parents should have until about age 2 to “abort” their offspring, and who also wrote vegan manifestos? If yes and yes, then what’s the nonsense?
That’s… extremely odd. I can square abortion rights with veganism by using one of those (shaky, IMO) bodily-integrity arguments or by observing that a human fetus before birth has about the depth of experience of an oyster (though it’s more neurologically complex). But both those arguments dissolve as soon as it’s born.
I guess what I’m saying is, cite?
“Similar to his argument for abortion, Singer argues that newborns lack the essential characteristics of personhood—”rationality, autonomy, and self-consciousness”[20]—and therefore “killing a newborn baby is never equivalent to killing a person, that is, a being who wants to go on living.””
From Wikipedia, which refers to the somewhat dubious sounding “Singer, Peter. Peter Singer FAQ, Princeton University, accessed 8 March 2009.” However, that was a literally 3 second search, so there’s likely more direct and first hand sources.
Huh. That looks an awful lot like a sacred value.
Disappointing to see in someone so dear to a scene that’s supposedly all about thinking past sacred values. But not that surprising, if I’m being honest.
Isn’t Singer one of those suffering-based utilitarians? In that paradigm, torture / factory farming can be considered worse than murder / killing.
@Nornagest Ask and you shall receive.
https://en.wikipedia.org/wiki/Peter_Singer#Abortion.2C_euthanasia_and_infanticide
Yes. SSC forgive me if it’s uncool to post the same quote twice in the same thread. But I’m freaking out at the misquote/misrepresentation in Wikipedia’s Singer page, and the fact that people who are into priors and fact checking have accepted the nonsense that is going around.
I posted Singer’s actual statement above. Here’s tl/dr on our current question.
[Although a newborn baby] is not a person, that does not mean that it is all right to kill such a baby.
https://www.princeton.edu/~psinger/faq.html
“that is because most infants are loved and cherished by their parents, and to kill an infant is usually to do a great wrong to its parents.”
@ Nornagest
a human fetus before birth has about the depth of experience of an oyster (though it’s more neurologically complex).
Getting back to abortion, this is the kind of distinction I was trying to make between a foetus and an animal after birth (human or non-human).
I disagree with Singer in lumping them together till age 2 or whatever. As soon as a baby is born, it starts to be an active creature involved with the outside world — struggling, crying, experiencing discomfort and disliking it and moving purposely to eliminate it, and pretty soon experiencing hunger and moving toward teats. I draw my line between this stage and the ‘oyster stage’.*
* Actually imo an adult oyster is more knowable than any species of foetus (except perhaps very late term).
@ Eggo
Here is how Singer’s quote in his faq goes on:
not a person, that does not mean that it is all right to kill such a baby. It only means that the wrong done to the infant is not AS GREAT as the wrong that would be done to a person who was killed.
I don’t agree with Singer’s use of ‘person’; imo the moment a baby is born it is a person. I disagree with Singer on quite a few things — but I don’t want to see him or anyone grievously misrepresented.
@ houseboatonstyx
Eh, surely the fetus begins to experience things gradually, as its nervous system develops? The womb may not be a very rich environment, and there’s not a lot to move away from or toward while you’re in there, but some sort of switch being flipped during birth doesn’t seem very plausible.
Also, it might be my anti-sessile prejudice speaking, but aren’t oysters pretty simple? Especially adult ones.
@ Nita
some sort of switch being flipped during birth doesn’t seem very plausible.
Many hours of struggle through a birth canal, suddenly getting new hormones from a very stressed mother, umbilical flow disturbed then ending abruptly so that the system has to switch to breathing air, might flip a switch or two, gradually and/or suddenly. I’ve seen some respectable speculation about that, though the only evidence I remember was about maternity nurses being able to identify which babies were caesarian because they lay still and unresponsive in their cribs for days “as though they aren’t really here yet.”
There may or may not be a sharp change in the territory (the foetus), but there’s a sharp edged line in my map — the same kind of ‘bright line’ that some people draw between humans and other animals or between animals and plants. I draw it along the edge of my understanding. I can make a try at understanding what sort of experience and reactions an adult oyster might have, but a foetus in the womb is beyond me (along with individual microbes and such).
I find it so very weird that everyone looks at the depth of past experience. Past experience isn’t what we’re taking away, and it shouldn’t be what goes in the consequentialist calculation.
@ Caue
Past experience? I’m talking about a foetus’s present experience/functioning and present capability of it.
Did you mean talking about future rather than about past?
I may have misunderstood what was meant by “depth of experience”, not sure. But the point is the same: when I think about why it’d be bad to kill someone, about what that someone would lose, arguing that the fetus currently has no complex neurological activity looks identical to saying the same thing about an adult in a coma from which we are sure they will wake up. The difference between the fetus and an oyster, however, is so blatant that it baffles me that anyone would bring this up.
“SSC forgive me if it’s uncool to post the same quote twice in the same thread.”
Not uncool, just not any more persuasive the second time.
I was honestly hoping to be shown to be wrong about Singer, given the esteem Scott apparently holds him in, but if that is your defense, I’m not moved.
@ Caue
It was Nornagest’s oyster, I just borrowed it as an image of a sessile being in a very limited environment. But the oyster is a fully developed creature doing its own sustainable thing, which imo is more knowable than a foetus.
what would that someone lose, arguing that the fetus currently has no complex neurological activity looks identical to saying the same thing about an adult in a coma from which we are sure they will wake up.
Here you are talking about what you think their later future may be, rather than what their present and near future is.
The man, or any other adult animal, is complete, functional, knowable, except for whatever accident has paused his normal knowable activity. The foetus at best will someday be replaced by an independent, functional human.
@ Randy M.
No, you’re missing the key sentence from the faq, which says killing the baby would be a wrong to the baby.
Houseboat, I think that’s reformulating the position I’m arguing against rather than addressing the point. But I’ll leave this for a more appropriate occasion.
@ Caue
Yes. The split hairs are becoming confusing. My thanks to you and Nita and Nornagest for helping me to clarify my position (at least to myself).
Note to self before I forget: Action-wise, I have always tried not to harm even a beansprout without a good reason. Principle agrees with that action, saying there should be no bright line, no live [noun] should be considered worthless. Sanity-wise, I’ve been drawing the same kind of bright line that most people do, just drawing it in a different place (ie at the limit of my knowledge). But “I cannot imagine what X is experiencing/wanting” is not “X is a zombie”. I don’t insult oysters, so I shouldn’t insult foetuses either, nor zombies for that matter. I do support abortion, and do eat oysters, but I neither support nor eat zombies.
I’m going to bed.
Pictured above: just one of the many moral stances you’re unlikely to see articulated outside this fantastic forum.
I’m not very invested in this metaphor, but to unpack it a little more: when we’re talking about reinforcement learners, which humans at least to a large extent are, past experience — the training set, in AI terms — has a close relationship with present behavioral complexity. An untrained reinforcement learner (using most common architectures) is running the same algorithm and taking up the same amount of memory as a trained one, but, per bit, it’s doing far less work.
Put another way, an oyster with a brain the size of a planet would probably not have much going on upstairs.
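To make the metaphor concrete, here is a minimal toy sketch of the point about trained vs. untrained reinforcement learners (purely illustrative; the corridor environment, state count, and hyperparameters are my own assumptions, not anything from the thread). A tabular Q-learner occupies exactly the same memory before and after training, but only after training does the table encode any structure:

```python
import random

random.seed(0)

# Toy environment: a 5-state corridor. Start at state 0; reaching
# state 4 gives reward 1 and ends the episode.
N_STATES = 5
ACTIONS = (-1, +1)  # step left / step right

def fresh_table():
    """A Q-table with every entry zero: the 'untrained learner'."""
    return {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def train(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Standard epsilon-greedy tabular Q-learning."""
    q = fresh_table()
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:  # greedy action, breaking ties randomly
                best = max(q[(s, b)] for b in ACTIONS)
                a = random.choice([b for b in ACTIONS if q[(s, b)] == best])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

untrained = fresh_table()
trained = train()

# Same algorithm, same number of table entries ("same memory")...
print(len(untrained) == len(trained))     # True
# ...but the untrained table is perfectly uniform, while training has
# written structure into it: per bit, it is now doing far more work.
print(len(set(untrained.values())) == 1)  # True
print(len(set(trained.values())) > 1)     # True
```

The comparison at the end is the whole point: behavioral complexity lives in the learned contents of the table, not in the algorithm or the storage size.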
Not just you. I found this place, immediately loved Scott’s writing, and then got a bit of a nasty shock when I saw him talking about Singer in approving terms.
Definitely not just you.
To paraphrase my previous FB comment on the matter, personally I used to have a non-AI-inclusive picture of EA (though the European LW meetup managed to shift my perception a bit), and wasn’t sure if I should count myself in the movement if I’m mostly interested in supporting x-risk mitigation. In particular, I haven’t done the GWWC pledge, as they concentrate on different cause areas even if they ended up broadening the pledge.
But if people are having concerns about us AI-concerned white C(i)S males being possibly _overrepresented_ in EA, even if those concerns are perhaps exaggerated, it seems I might as well count myself as part of the crowd. (I do wonder if I should do the GWWC pledge as well to boost the numbers; are AI x-risk contributors actively welcome there or is it more like “I guess you can join up too”?)
I’m pretty sure they want more people who are promising to donate to make the world better more than they want to preserve the focus on charismatic causes. “I guess you can join up too” is not “Ugh, you can join if you insist”.
On the last few paragraphs, one random person’s impressions:
The comment section here feels like it leans way rightward, but I think what’s actually going on is that it has roughly equal numbers of lefties and righties but the lefties are almost all very moderate and a substantial fraction of the righties are really far to the right.
I agree with Scott’s self-characterization w.r.t. neoreactionaries and social justice. I think some people who characterize Scott as a neoreactionary sympathizer would agree that what he does is to treat neoreactionaries with the same level of respect and intellectual interest as everyone else, but would suggest that that’s a moral error (“would you treat neo-Nazis with the same level of respect and interest?”) or an intellectual error (“would you treat circle-squarers and young-earth creationists with the same level of respect and interest?”). I sympathize with them on both counts, though I think it’s clear that neoreactionaries are neither as evil as Nazis nor as stupid or ignorant as young-earth creationists. I also don’t know how Scott actually treats neo-Nazis and young-earth creationists. Probably with respect and interest, but the same kind of respect and interest as neoreactionaries or moderate liberals? Maybe not.
[EDITED to add:] A slightly steelmanned version of the “waaah AI x-risk is taking over EA” complaint goes along similar lines. “Quite right: AI x-risk is taking up only, let’s say, 10% as much time as stopping people dying of malaria. But that’s still way too much, because AI x-risk is a silly thing to give much attention to, because malaria is a vastly bigger problem, much more than 10x. So, yeah, the EA movement isn’t being dominated or taken over by AI x-risk concerns, but it has been persuaded to give them far more attention than they warrant.”
That might be badly wrong too, but it’s a different kind of wrong.
There do seem to be a lot of NRx folks here. But aren’t there also a fair number of communists, or is that just one or two people? I’d be curious to know if the numbers are comparable.
If there are lots of communists, they spend a lot less time talking about communism. There are an unusual number of vocal neo-reactionaries, unusual for their
a) extreme positions: most right-leaning people want to emphasise they share left-wing values (and vice versa), whereas NRXs often directly challenge things not even seen as left-wing any more due to being so mainstream
b) tendency to present their position as obvious, which in turn implies they assume everyone here agrees with them. They just sound like the ‘in group’, despite the fact that they do get challenged quite a lot
c) stylistically very unlike right-wing people I meet in real life. This is partially the ‘explicitly stating the most controversial bits of my belief system’ effect I’d expect with the rationalist context.
Also, a while ago, this blog really did seem to be a ‘safe space’ for some VERY extreme right-wing positions, both in terms of content and presentation. That guy whose avatar was a chicken, for instance, who I think was banned? James McDonald? The anti-SJ sentiment is connected – there are still an unusual number of posts that come across as straightforwardly, casually sexist to me.
Scott personally is incredibly conscientious in giving SJ types a fair hearing, but this comes across as the result of truly heroic conscientiousness in general being applied to very visceral underlying anti-SJ feelings (which may be justified: as someone who lives in the UK and doesn’t use tumblr/reddit/whatever, I don’t seem to live in the SJ environment this blog refers to. In fact, I think this blog may have taught me the concept of SJ as a group.)
James A. Donald, AKA Foghorn Leghorn.
Just wanted to say that this: https://slatestarcodex.com/2015/08/13/figureground-illusions/#comment-227112
…doesn’t apply to Jim. Even when he says something true, he says it in the most vicious, downright nasty way imaginable.
As someone who’s only recently started participating, I was/am surprised by the number of religious people. Which is not to say unpleasantly surprised, just surprised.
My feeling is that that demographic has increased recently, for some reason. I was struck by the large number of religious people commenting on the last post (or some other one this week; I forget).
The communist contingent used to be more vocal than it is now, I think. I think we see less of Multiheaded now, and I think there were maybe some others that aren’t around any more.
Maybe there are just a few of us religious types, but we’re just too prolix? Sorry about that….
I think we’re comfortable about speaking about religion here and even admitting to be believers because criticism and even opposition is not going to be the standard Y U SO DUM? you get elsewhere when you admit to being a believer, and even worse, not one of the nice, safe, cuddly, liberally progressive socially and theologically ones.
This is the statistic I am most interested in seeing in the survey. My sense is that there are proportionally more religious people here than on LessWrong (though I’m confident we’re still a minority), but I could be wrong about that.
I think Scott’s style and interests are much more likely to be interesting to a broader, less EY-adjacent demographic than LW. Since religious folks are a LOT less EY-adjacent in our worldview (and usually in our interests and affect, too), I’d be really, really shocked if SSC didn’t have more of us religious folks than LW.
I myself can only think of two others who post here. I know a number of other communists who share SSC articles they find interesting, and who are familiar with the culture here, but haven’t shown up in this comment section to my knowledge (and I think I would recognize their writing styles.)
Yourself, Multi, Eli, more recently another Russian person with an avatar of a dark-haired young person… if we expand into sjw-types (more social than economic, but not necessarily less left), there’s Barry and at least several more.
Prior to being introduced to this place through a Facebook friend about a year ago, I’d encountered two communists ever in 33 years of life and had never even heard of neoreaction. Either I’m just way out of the loop or this place attracts fringe people in general.
I wonder sometimes how much of this is predicted by basic sunk cost psychology, that if you’re going to spend a decade or more of your life devoted to refining your own rationality, you can’t possibly do all that and then end up with the same boring mainstream views as if you’d never done it, so wherever you end up coming down, it won’t be the middle.
That might be part of it, but also consider that mainstream views change wildly on the timescale of just one or two human lifetimes. When my grandparents were born there were a lot of intellectuals who supported the British Raj and there was another large group of intellectuals who supported Stalin. It is possible that the comfortable mainstream views of our time are special, and will stand the test of time, but perhaps not likely. What is more likely is that our current mainstream views have a horrendous failure mode, and I and I suspect many others on here are very interested to find it. NRx is one recent attempt at doing so. Communism is a failed attempt at doing so for the 19th century.
@Adam @Oliver Cromwell:
I think you’re both right. Adam is right that SSC tends to attract some pretty odd folks, because a lot of it is about niche interests that the median person finds a lot less interesting than, say, politics or celebrities. OTOH, Oliver Cromwell is right (never thought I’d write that phrase!) that the views of our era and milieu aren’t selected to be correct, so people who spend a lot of time worrying about punctiliously justifying their beliefs are more likely to adopt beliefs that are maybe a bit far from the local optima that the First World c. 2015 happens to have settled on through a combination of trial ‘n error and what Scott calls “Moloch.” Amplifying Oliver Cromwell’s point, I think SSC, like LW, is just sort of a contrarian magnet. So you get far-left anti-bourgeois contrarians, and libertarian meta-contrarians, and NRx meta-meta-contrarians (along with at least one Thomist meta-meta-contrarian).
That’s not something to also consider, though, it’s just a different way of saying the same thing. Of course it’s very unlikely that mainstream views today are going to remain mainstream views in 50 years, which is similar to the observation that in any given season, the team with the highest odds still has lower odds than the field. This doesn’t tell you where to come down, though, just ‘not the middle,’ but nobody ever bets the field. If there’s a sure bias that humans have, it’s taking a stand one way or another even when they really have no clue what’s right.
@Adam:
If nobody bet the field, then no one could bet on individual teams.
The house bets the field. The reason the house bets the field is that a) they usually structure the odds so they make money based on volume of betting, so they want people to bet, and b) most people aren’t interested in making a bet that pays off at 1.05 to 1. If they wanted that they would just put their money in a CD and have almost no downside risk. The house is happy to do this because of, again, volume.
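To make the volume point concrete, here’s a quick sketch with made-up decimal odds (all numbers are invented for illustration, not real lines from any book):

```python
# Hypothetical two-outcome market. Decimal odds are the payout per unit staked.
decimal_odds = {"favorite": 1.25, "underdog": 4.00}

# The probability each price implies is 1 / decimal odds.
implied = {team: 1 / o for team, o in decimal_odds.items()}

# Fair odds would make the implied probabilities sum to exactly 1. A book sets
# them to sum to more than 1; the excess (the "overround" or "vig") is the
# margin it earns on betting volume, regardless of which side wins.
overround = sum(implied.values()) - 1.0

print(implied)              # {'favorite': 0.8, 'underdog': 0.25}
print(round(overround, 2))  # 0.05 -> a 5% margin baked into the prices
```

If the money wagered on each side is proportional to these implied probabilities, the house pays out less than it takes in no matter the outcome, which is why it’s happy to “bet the field.”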
What proportion of comments do you think you can infer enough of a worldview from in order to label people as left or right? For me, I’d say less than 1%. I have no idea what proportion of commenters are left or right, or moderate left or extreme right, or anything like that.
And given that the left/right distinction is incredibly coarse and, ultimately, pointless, why would you want to?
Most comments here are made by regular commenters, so if you hang around long enough to learn people’s names you eventually get a feel for their politics. After a while, it is impossible not to notice that guys like Multiheaded and Eli are on the left while guys like Steve Johnson and Mai La Dreapta are on the right.
That makes sense, sure. So, what proportion of commenters on this page do you think you have a pretty good read on the politics of?
“The comment section here feels like it leans way rightward, but I think what’s actually going on is that it has roughly equal numbers of lefties and righties but the lefties are almost all very moderate and a substantial fraction of the righties are really far to the right.”
The survey thing I looked at was a rating of political beliefs from 1 (far left) to 10 (far right). There was no difference in extremism levels.
One possibility is that being far left is much more socially acceptable (and commonly expressed) than being far right. It is OK to say that Lenin is your hero and wear a Che t-shirt; less so to say your hero is Tsar Nicholas and wear a Pinochet t-shirt. Faced with equal numbers of such people, the latter group is considered more visible and alarming.
But another problem is that LW/SSC has barely any rightists in the sense the general public would understand, so their answer to the question is difficult to interpret. Would Moldbug be 10 on the scale? He’s certainly an extremist, but he’s also nothing like Hitler, which is what most people would probably interpret 10 to mean.
Are there Pinochet shirts? Or even Franco shirts? That sounds like it would make for a fun conversation piece.
There are both and probably less fun than you’re imagining.
I’ve never seen a Pinochet shirt, but I have seen a Che shirt that gave Che Mickey Mouse ears.
IME: there are many Pinochet – as in, pro-Pinochet – thinkpieces, econ blog posts, occasionally even articles in the mainstream econ press, etc – but a drop-off since the crisis of ’09-10 and inequality growing into a liberal buzzword, I think?
Che doesn’t seem to be such a big deal among the leftists who favour or support the Cuban government. I think he’s much more of a generic brand than specifically an icon to leftists nowadays?
A version of the Che Tee.
That’s some terrible graphic design.
I think that’s maybe true in general, but I’m not sure how true it is in the SSC comments section specifically. Certainly I think anyone who was going to defend Che Guevara in this blog is going to do so with full awareness of what that entails, and is aware of who the equivalent figures on the right would be.
g’s comment does mirror my anecdotal experience. To put it in terms of the above, it seems to me that there are a lot more people who would be willing to defend Pinochet than Che. I’m sure that’s to a significant extent a result of my biases (being somewhat… left-liberal-ish). But I don’t think you need to resort to figure-ground or social acceptability to explain why arguments appear differently to different people – I think it can be explained mostly as a consequence of different perspectives. Like, I don’t think it’s a particularly surprising result, really.
On this blog yes but the whole point is that this blog does not reflect wider society, either the mass culture or the intellectual/academic culture. You go from an environment in which Che’s face is used by totally apolitical people to attract girls into one where he is a lightning rod for strident moral criticism. You believe you have entered an insane right-wing parallel reality. In fact you are only receiving the sort of treatment someone who wore a Pinochet shirt would receive at the debate club of any half-decent college campus in America. Those people are *not* experiencing the real world as one that constantly reinforces their values, so the situation is not symmetric.
@ Oliver Cromwell:
Sorry, I don’t think I was clear enough.
My argument was that I don’t think anyone who comments on here would defend or idolize Che in the naive kind of way you’re describing here. Anyone on the SSC comments who would defend Che is aware that he is a lightning rod for strident moral criticism and is doing so anyway. There are maybe a few people who have the kind of social-normality response that you’re describing (e.g., the person somewhere else in the comments for this post who wrote a reply helpfully pointing out that most people in society would consider NRx people racist weirdos) but I don’t think it’s an accurate characterization of most of the regular commenters.
On the other hand, it seems to me that a certain amount of this kind of thing is a natural result of differences in perspective. If you take, for instance, two arguments – on the one hand, some kind of argument in favor of communism; on the other, the NRx argument that progressivism is a more or less intentional program for the destruction of Western civilization – you would expect people to interpret and evaluate those arguments differently based on their perspective. Someone who’s more left-ish will be more sympathetic to the communist argument, not because society condones communism, but because they agree with more of the underlying values and logic; someone who’s more right-ish will see it as more extreme and hostile because they reject more of the underlying values and logic. And vice-versa, of course. This does not seem like an especially surprising outcome.
So I’m just not sure why we have to go to the point that people are unable to see things against the background radiation. And it’s the kind of argument that I tend to be skeptical of to start with.
I’m pretty sure this is wrong, at least if we’re talking about the US.
The average person, whether they vote chocolate, vanilla, or strawberry, doesn’t know a lot about history; their knowledge of any particular figure is limited to what they remember from world history class in high school and how they appear in pop culture.
Neither Che Guevara nor Pinochet is going to get more than a paragraph in a high school course, unless you’re going to school in Miami or somewhere else that justifies a greater-than-usual focus on Cuban or Chilean history. That leaves pop culture. Pinochet is basically a nonentity there; he’s one name in a long list of tinpot dictators. I doubt the average person could even tell you whether he was allied with the US or the Soviets. Che has a bit more of a presence, but it’s still rare for people to know anything about what he did; they just know he was a charismatic Cuban leftist revolutionary.
So if people defend either one, it’s usually going to be on those terms. And I think you’ll find more people willing to defend the latter than the former.
>To put it in terms of the above, it seems to me that there are a lot more people who would be willing to defend Pinochet than Che.
I’d say that has a lot more to do with contrarianism than with political leaning; few people except the Chilean extreme right would be willing to defend Pinochet without a thousand qualifiers, so it’s an enticing prospect.
I’ve struggled to determine who I’d defend, myself, but ended up siding with Che because fuck Chile.
@ nornagest: sorry, unclear referent. That sentence was specifically meant to describe this site, not society in general
As far as the popular perception, I’m sure it doesn’t help that nobody made a movie about Pinochet’s youthful idealist phase.
your hero is Tsar Nicholas
Which Tsar Nicholas? Nicholas I was conservative and repressive; Nicholas II tried to be moderately progressive but it was a bit too late by then; the revolution had more or less started and the old guard thought he was giving too much away and needed to crack down more.
Am I presuming too much to think you mean Nicholas I? 🙂
Why split hairs?
Petrus Magnus for Tsar of all the Russias!
Che- I assume that anyone who wears a Che shirt knows about as much about politics as someone wearing a WWJD bracelet knows about theology.
I’ve noticed that references to not just Stalin and Mao, but also Hitler are becoming more acceptable than they used to be.
Something like this series of jokes, for example, is far more acceptable, it feels like, than it would’ve been in 1990.
http://www.memecenter.com/search/mein%20kampfy%20chair
I’m sure there were people doing similar types of humor, but it was a lot more edgy. In general, it’s starting to feel like we’ve got enough distance to joke about it in ways we didn’t previously.
Or I could be totally misreading everything.
The comment section strikes me as mostly unpleasantly far to the left, with a substantial and loud minority that’s unpleasantly far to the right. Where my reasonable thinkers at? (Not touching anything political, is where they’re at.)
Keep in mind, people on the opposite side to you who are reasonable and somewhat close to the center can be mistaken for people on your own side since you don’t feel much urge to oppose them.
I consider myself quite reasonable (which obviously means I am – obviously), and FWIW, the main reason I don’t comment here as frequently as I’d like is that SSC gets a lot of comments, more than I can reasonably (ahem) respond to. By the time I see any new post, it already typically has 300+ comments. (Do these go up regularly at 12AM PST or something, and everyone checks right then?)
We have a subreddit in part for this issue, but no one is really using it.
I would have listed myself as a 1 or 2 on that scale even though I am vehemently anti-communist simply because I would have adjusted the scale to 5 being “the average SSC commenter.” I don’t think this is a great way to measure political extremity.
If you really wanted to do it correctly, I would suggest finding questions that are quantified on something like the DW-NOMINATE scale. Otherwise you’ll find a lot of “centrist” looking positions because of anchoring bias.
Good point. When I read those numbers, I imagine someone thinking of Peter Thiel-style libertarianism as “moderate,” and then extrapolating whether their NRx is THAT far right from that baseline.
I wonder if there’s any research on how people treat 1-10 scales if not given any explicit hints. Linear? Exponential? Completely crazy fashion that’s barely ordered?
Personally I don’t like them. When an interviewer asks “from 1-10 how well would you say you know java” I’m tempted to say 3, but I know I’m supposed to say 8.
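For what it’s worth, here’s a toy sketch of how much the assumed response curve matters. Both mappings below are invented for illustration, not taken from any actual survey:

```python
# Two hypothetical readings of a 1-10 self-rating as a share of the maximum.
# These curves are assumptions for illustration, not measured response data.

def linear_share(rating):
    """Rating taken at face value: an 8 means 80% of what a 10 means."""
    return rating / 10

def exponential_share(rating, base=2):
    """Each step doubles the underlying quantity: an 8 means a quarter of a 10."""
    return base ** rating / base ** 10

print(linear_share(8))       # 0.8
print(exponential_share(8))  # 0.25
```

Same number, wildly different implied skill level depending on the curve the rater had in mind, which is exactly why uncalibrated 1-10 answers are hard to compare across people.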
On a scale of 1-10, how linear would you say your use of a 1-10 rating scale was, 1 being most linear and 10 being most exponential?
When I go to the doctor and he asks me to rate the pain I’m having on a scale of 1-5, I tend to interpret it as a log scale.
How big of a log do you have to scale for it to rate a 5?
Biggest log I’ve handled without machine help was about 38 board feet.
3i — I use 1-10 sinusoidally.
There was no difference in extremism levels.
SOME OF US ARE EXTREMELY MODERATE!
I probably – amn’t, though I’m immoderate about specifically Irish political things in general, rather than American things 🙂
Bono: yay/nay?
Bono – most Irish people agree he should stick to the singing and for fuck’s sake give over the preaching, this is a concert, not Sunday Mass.
I liked “October” era U2 the best – you know, before they got all stadium-rock big 🙂
I think Neoreactionary is being conflated too much with “conservative/right wing people who aren’t dumb”. I think the latter group is pretty significant here but it doesn’t seem like the former is that strong; of course, I am not great at spotting them because I still don’t totally get what they’re all about.
One of my pet peeves about this blog is that half the time when Scott discusses some more or less ordinary conservative idea or thinker, he labels it/them “(neo)reactionary.” I basically never see anyone talk about neoreaction outside this blog (and I read iSteve). Even if right-wing LW folks are more likely to embrace that label than “conservative,” unless we’re explicitly talking about bringing back a monarchy or something, “conservative” is generally a better label.
“but the lefties are almost all very moderate and a substantial fraction of the righties are really far to the right.”
The problem with such a statement is that it assumes a clear definition of “moderate,” which is one of the things people with different ideological views disagree about.
To take an example from a recent thread, a number of people seemed to agree with a statement along the lines of “neither communism or U.S. style capitalism works very well.” My guess is that at least some of them considered their views moderate–after all, they were being negative about both alternatives.
From my standpoint, they were revealing either a startling ignorance of the historical facts or a commitment to left wing ideology verging on lunacy. Communism’s “not working very well” consisted of several of the most murderous regimes in history and economic policies that kept well over a billion people poor for decades. Capitalism’s “not working very well” consisted of the greatest increase in material welfare, for poor as well as rich, that we have evidence of—while failing to produce the best outcomes that anyone could imagine.
One could, of course, reverse the point. From the standpoint of some of them, my views may well appear extremely, perhaps insanely, right wing (if that includes libertarianism).
One solution is to define “moderate” and “extreme” not in terms of congruence to truth but of position in some existing distribution of views. But most of us will have our perception of that distribution badly distorted by the particular bubbles we are living in.
Your example really reminds me of this quote:
“Under capitalism, man exploits man. Under communism, it’s just the opposite.” (John Kenneth Galbraith)
I think most members of the Blue Tribe would easily agree with it. Somewhat in their defense, I might point out that capitalism, especially “U.S. Style Capitalism”, pretty much screams Red Tribe (and maybe Grey).
What most members of the Blue Tribe would argue is that while capitalism is very good at creating wealth, it does poorly at making sure that the distribution of wealth is fairish on its own. Most members of the Blue Tribe are not state socialists calling for communal ownership of the means of production. We just don’t think that the completely unregulated market will work as advertised based on our reading of history. We also think that some things are better provided by the non-market for a variety of reasons.
Define “fairish.”
My comment was not about people who think that the U.S. version of capitalism can be improved. I agree with them, although my improvements might be in a different direction than theirs. Also my non-market mechanisms.
It was about people who wrote as if the U.S. version of capitalism and the Soviet/Chinese/Khmer Rouge versions of communism were about equally imperfect.
It’s a weird kind of historical gotcha that blames one “side” of a debate for its death camps and praises the other for reaping the benefit of advances in industrial machinery. It’s also a weird kind of gotcha that uses the widest form of Communism (all those dictators and systems were definitively all communists!) solely in the negative while keeping the narrowest possible capitalism solely in the positive (What? The Herero Wars weren’t capitalist!)
Further, where do you set the historical cut off? Apparently anti-communists (from that last thread mentioned) get to use every death from every sort of monstrously terrible thing to point out how absolutely atrociously bad some countries were managed. So can I be glib and ask exactly how many Native Americans had to die for “American” style capitalism to be viable? If that’s a faux pas by reason of historical cut-off, okay, sure, but then what was the total death toll from the scramble for Africa again? How many wars are we counting? Filipino? Opium? Vietnam? Is WWI something I can cite as at least a little bit influenced by the underlying assumptions of capitalist markets and the constraints of competing capitalist empires?
If people are so insistent on mentioning how “communism leads to the most murderous regimes in history!” can I at least ask that they be honest? The reason Communism is apparently such a boogeyman in some circles is that communists mainly killed their own people in various either banal or imaginative ways.
Good capitalists, because they aren’t idiots, understand the proper employment of market externalities and so preferentially go about killing someone somewhere else.
The only kind of startling ignorance of historical fact is the one that claims that either system hasn’t resulted in the somewhat unfortunate demise of millions, and I’d like to point out that perhaps tallying up mountains of skulls to prove The Other Side is worse than yours is a bit of a wasted effort when you’re still dealing with mountains of skulls. So yeah; communism’s “not working very well” resulted in the Cultural Revolution and the Great Leap Forward and the Holodomor – did capitalism’s “working as intended!” lead from the stock market to Black Tuesday to the National Socialist take-over of Germany? If one is purely attributed to the professed ideology of the leading members of the country, is the other?
On paper, both A and B sound wonderful. In practice, sometimes bad things might happen with both. I’d suggest that “acknowledges both types have some problems” is a pretty good definition of a moderate in these debates.
At the risk of speaking for Dr. Friedman, I suspect one of his arguments against treating communism and capitalism with the same principled balance is that the failure modes of communism stemmed from it being communism, while the failure modes of capitalism stemmed from it being extended with various structures of centralized power with monopolies on the use of physical force.
I could be cheeky here and say that the sense of “moderate” you describe above would probably feel to a free-market advocate like saying that a mixture of 75% dreck and 25% honey and one of 75% honey and 25% dreck are both proven to taste bad, so we should compromise and seek a mixture of 50% dreck, 50% honey.
(But again, that feels meaner than I wish – I think you were wise to try to note failure modes.)
The trouble here, as has been pointed out repeatedly elsewhere in these comments, is that communism is a deliberately constructed socioeconomic system with a specific ideology behind it, and capitalism is not. It has a specific scope and meaning in a Marxist context, but these days even self-avowed Marxists talk about “late capitalism” (a term that I’ve always found hubristic, but never mind that) being a thing with traits that Marx himself never discussed, so I don’t think that’s something that we can blindly expect to lead us to results that make sense. To say nothing of the question-begging aspects of picking up Marxist framing when we’re talking about communist body counts.
So no, I don’t think there’s really a principled way to compare communism against capitalism, because capitalism is underspecified. We could compare it against 20th-century liberal democracy or social democracy (the best candidates as direct competitors, IMO), where in both cases it ends up looking rather poor; or we could compare it against early modern imperialism, where it still looks bad but both sides look at least capable of producing piles of skulls big enough to be called “mountains” without embarrassment; or we could compare it against the various third options that started popping up in the Thirties, many of which ended up looking at least as bad as Marxism-Leninism did. Or we could compare it against systems that have never existed except in theory, but I don’t think that could possibly avoid turning into a circlejerk.
This severely understates both the coherence of liberal ideology and level of active thinking and planning required to create the kind of frameworks liberals sensu lato have pursued, and the degree to which institutions in state socialist societies were the result of improvisation, trial-and-error, and compromise between important power blocs.
I don’t think it does. Note that I explicitly put 20th-century liberal democracy and social democracy on par with communism (by which I really mean Marxism-Leninism and its relatives, as actually implemented) as socioeconomic systems, down in that last paragraph; I just don’t think we can say the same for “capitalism”.
Now, 90% of the time, when people talk about capitalism as compared to communism they’re talking about liberal democracy as implemented by the US and some of its allies during the Cold War. I think that’s a fair comparison as long as it stays suitably bounded. But that underspecification gives leeway to pull in all sorts of other shit both in support and in opposition, and that’s what I’m trying to avoid.
“use every death from every sort of monstrously terrible thing to point out how absolutely atrociously bad some countries were managed.”
Point- at the same time, it seems a bit unfair to gloss over the fact that literally every country that became ideologically communist, across a wide variety of pre-existing cultures, seemed to end up with purges, repression, shortages….
And what’s more- not to be all No True Scotsman- but Marx is pretty clear that violent revolution was a big part of his schemes, as was control of the proles until they were smart enough to be socialist on their own.
“So can I be glib and ask exactly how many native americans had to die for “American” style capitalism to be viable?”
Not that many, honestly, and over a long period of time- most of the post-Columbian die off was due to disease, and I’m sorry, but we can’t really blame people who didn’t know what germs were for that one. Further, the high-end estimates of pre-Columbian North American populations tend to be overstated.
Reviewing the death tolls of the Indian wars shows a large number of atrocities on both sides over a period of hundreds of years. And again- I find it difficult to get too exercised about this more than, say, the Pawnee and Sioux killing the hell out of each other. Do those wars go into the capitalist account?
“but then what was the total death toll from the scramble for Africa again? How many wars are we counting? Fillipino? Opium? Vietnam?”
Notably, the Vietcong killed about a million Vietnamese… right after we left. There’s a very solid argument that we were LOWERING the death toll by being there.
The scramble for Africa had relatively low casualties, and again, mostly attributed to things beyond the control of even the most wicked colonialist- sleeping sickness and smallpox killed lots and lots of people. Leopold, the only really Super Evil Wicked Person, only had the Congo Free State for a little while before it was taken away and turned into a pretty nice place to live.
“Is WWI something I can cite as at least a little bit influenced by the underlying assumptions of capitalist markets and the constraints of competing capitalist empires?”
Nooooope. Not even a little bit. The start of WWI was based around a reasonable fear on the German side that everyone else was scared of them and was literally going to choke off their trade, and a reasonable fear on the Allied side that the crazy Prussians were going to invade France to dominate Europe.
Ultimately, WWI was just another one in the endless cycle of European wars stretching back to……pick a date after Rome fell, honestly. France, England, Spain, (Various Germanic States) and other players all playing for control of Europe. It was no more an indictment of capitalism than it was of Monarchy or democracy. It’s only notable because technology made it so deadly.
And all this has to be laid against… how many communist dead? 80 million in Mao’s China, 50 million in Stalin’s USSR? This ignores the many, many later deaths in those regimes after those dictators died- one million in Vietnam, 3 million in Cambodia. Another 100K by the Stasi alone.
I mean, capitalism simply isn’t in it with communism. I don’t think we should gloss over other deaths, but they are not comparable.
CJB: my god, pure ideology. Ever heard of Lord Lytton? Ever heard of the goddamn Opium Wars? How about all the divide-and-conquer rule in the colonies, so blithely trading hardness for fragility?
This is inaccurate. The Belgian Congo became a fuck-up due to multiple factors (tenuous claim to the area, distance and low-tech communication delays, corruption, lack of oversight) that had nothing to do with Leopold’s character – I haven’t actually heard anything especially bad about him as a person (except his liberalism, I guess). Since he was the ruler of that realm, he is ultimately responsible, but the outcome wasn’t due to malice so much as incompetence.
@AngryDrake – Malice was clearly involved to a horrifying degree, and as you note, he was the one ultimately responsible. Who set the system in place? Who was managing it day to day? one of his ministers, I’d presume? Was this minister subsequently jailed and executed for his crimes? If not, I’m pretty comfortable calling Leopold super evil.
Comparing with 20th century liberal democracy gets you, amongst other things, the French and British empires, Canada and Australia’s stolen generation of indigenous people, and the USA and South Africa’s racial segregation, for a start. And that’s if we blame the Cold War entirely on the Communists and India’s wars with Pakistan entirely on Pakistan and so forth.
I think liberal democracy is the least bad system of government humanity has tried so far, but that’s not a very high bar.
The British empire was winding down by then; the Rhodesian Bush War falls into that period, as does the Suez crisis and the violence surrounding Irish and Indian independence, but those all have fairly minor body counts compared to some of the stuff in the previous century. There were a number of significant French colonial conflicts after WWI, though, such as for example in French Indochina and Algeria.
I wouldn’t count any of the proxy wars during the Cold War toward either side.
I’m not trying to say the 20th century liberal democracies were some kind of paradise, but the problems they had in no way seem comparable.
Tracy W – “Comparing with 20th century liberal democracy gets you, amongst other things, the French and British empires, Canada and Australia’s stolen generation of indigenous people, and the USA and South Africa’s racial segregation, for a start.”
Tally it all up. I think if you do, you’ll find that Communism is still so far out ahead that the comparison loses meaning.
Canada allowed/used religious organisations (whichever phrasing you prefer) to take indigenous children away from their parents and raise them in abusive environments, resulting in a hideously high mortality.
The Khmer Rouge declared that anyone who wore glasses was a class enemy who should be executed. Their phrase for anyone who’d lived in a city was “to preserve you is no benefit, to kill you is no loss.”
There is a *fundamental* difference between the two.
@ Nornagest,
I tend to agree with you about the proxy wars (and ftm all or most proxy wars). My inner cui bono says from the US, some politically powerful industry wants to control a supply of bananas or oil or whatever. They find some cause in the relevant small country that will attract support from USians, inflame it till the natives are fighting each other about it, then come in on the more attractive side. In the Cold War proxy wars, there was no need to find a real local cause; both the big empires had the announced cause of defending the world from the other empire. Getting the locals to fight was a matter of offering support to some existing local insurgency or local loyalist group.
Disclaimer: I first read your comment looking for the “Rhododendron Bush War”. Were the R’s the combatants, the spoils, or allies against George W? And/or, was it fought in the Australian or some other bush?
https://en.wikipedia.org/wiki/Rhodesia
To the best of my knowledge, the only war that ever took place in the Australian bush was the Emu War (And how I miss when that article had the war-details-box-thing on the right!)
@Nornagest: The British left India only post-WWII, so that’s over 45 years of “winding down”.
The French were even later, eg the French colonial wars in Algeria and Vietnam were 1950s. That’s about half of the entire 20th century.
I agree that the liberal democracies of Britain and the USA did some far more appalling things in the 19th century too (France wasn’t a liberal democracy for large chunks of that time). And other countries: I omitted the Belgian Congo from my list of 20th century liberal democracy atrocities, but if we are going back to the 19th century you should include that too on the list of atrocities to be laid at the feet of liberal democracy. Then there’s Canada and Australia’s liberal democratic treatment of their indigenous peoples.
But you are the one who specified 20th century. And you’re also trying to downplay the French and British empires during that time frame. This strikes me as cherry-picking. If you want to compare capitalism to liberal democracy, do it over the same time-frame for both.
You can’t properly include the Belgian Congo on a list of 19th-century liberal democratic atrocities, because the Belgian Congo didn’t exist until 1908. And the Congo Free State, which did exist in the 19th century and was indeed atrocious, was not subject to the jurisdiction of any liberal democracy – the nations of Europe, specifically including Belgium, had by treaty granted the Congo to one Leopold Victor, private sovereign individual, specifically independent of his role as King of the Belgians.
It’s almost as if the people at the head of Europe’s governments in the late 19th century were nostalgic for absolute monarchy and wanted to make sure that government of the king, by the king, for the king, should not perish from the Earth even if pesky liberal democrats were pushing it out of Europe.
@Schilling: good point on the wording. I think however that liberal democracies such as Belgium thought, in the 19th century, that awarding a far-away nation as a private kingdom, regardless of the opinions of the inhabitants, was a good idea, is pretty appalling. Like selling someone into slavery, you do bear some moral responsibility for the consequences. It’s not like there hadn’t been numerous atrocities under colonial rule already by then. (I’m a NZer and one of the motives for the Waitangi Treaty in 1840 was an attempt by the British government to set up a structure so colonists would treat Maori better than Native Americans or Australian Aborigines had been treated.)
If you are going to attribute one of the 19th-century atrocities of liberal democracies to leaders nostalgic for monarchy, and thus implicitly classify it as not a liberal democratic atrocity, then the question arises: why not extend similar explanations to any particular atrocity done by private capitalists?
By the way, there’s an (imperfect but noticeable) correlation world-wide between capitalism and liberal democracy. Which says nothing about causality of course, but does make it difficult to contrast the two like Nornagest wants to.
I’m comparing communism to liberal democracy. I think liberal democracy is a type of capitalist system (as the term is typically used), but that “capitalism” is too large and amorphous a word to meaningfully compare against something as temporally and ideologically bounded as Leninist-style communism; so I’m trying to narrow it down a bit in the service of apples-to-apples comparisons. The reason I specified WWI is that that’s when the Russian Revolution happened. “20th century” is imprecise, but I thought it should be close enough given that I’m really interested in a period from 1917 to 1992 or thereabouts. A consistent time frame is precisely what I’m trying to establish.
Yes, the British were present in India until 1947. The French were present in Indochina until 1954 and in Algeria until 1962. The “winding down” I mentioned was less about time and more about emphasis: after WWI, few of the European powers spent much time or treasure on expanding their colonial ambitions.
It’s also way too weak a gotcha. Capitalism created the advances in industrial machinery, and is the most effective system at reaping them.
I suppose that depends on what you mean by “American”-style capitalism. If it’s whatever sort of capitalism takes place on the geography we refer to as America, then the answer would be zero; in an alternative world history, the Native Americans could have built their own capitalism, like Sweden, Denmark, Switzerland, Taiwan, South Korea and Singapore did. The same could be said for American-style democracy.
Adam Smith made some good arguments back in the 18th century that empire-building policies not only were terrible for the colonised, but also made Britain worse off than if Britain had instead engaged in free trade with politically independent countries. And Britain, France, Spain, Japan and the Netherlands are now richer than they were when they had colonies, which supports his argument.
So, basically, colonialism: a terrible thing, and one that retarded capitalism.
Failing to produce optimum outcomes is a reasonable interpretation of not working VERY well, since producing optimum outcomes is a reasonable interpretation of working very well. You are attacking the comment as though it said something like capitalism and communism are equally bad, which it didn’t, exactly. Saying that current US capitalism could be improved on is not dumb or extreme.
No, it isn’t. Producing optimum outcomes is a reasonable interpretation of working PERFECTLY.
As the saying wisely warns, don’t make the perfect the enemy of the good.
Also, don’t engage in the continuum fallacy.
In case anyone else was confused like me, “figure-ground” is a category of optical illusion using background/negative space, like the famous “face/vases” image.
https://en.wikipedia.org/wiki/Figure%E2%80%93ground_%28perception%29
I wouldn’t say the comment section here leans way way way to the right, but the political discussions do seem dominated by people who don’t like the left. This could easily be discouraging leftist readers from participating in political discussions here.
>I despair of ever shaking the label of “neoreactionary sympathizer” just for treating them with about the same level of respect and intellectual interest I treat everyone else.
Well… good luck with that one. Most people on either side of the political spectrum see people saying that blacks are genetically inferior and women’s rights were a mistake and say “Wow, fuck those assholes,” not “Let’s hear them out.” I’m not accusing you of being a neoreactionary or even a conservative, but the fact that you see their ideas as worthy of engagement shows that you’re more sympathetic to them than most people are.
>I despair of ever shaking the label of “violently obsessively anti-social-justice guy”
That one’s pretty unfortunate. You developed that reputation by making some good posts heavily critical of social justice as a movement, but I don’t think your views on social justice itself are very far from the center.
As someone who took the dive down the rabbit hole of NRx/DE material following Scott’s posts, I would just like to offer my opinion that your above characterization of NRx is roughly equivalent to SJ=”DIE CIS SCUM”, i.e. misrepresentative.
In the same way that a Gender Studies academic can go “I think gender roles are socially constructed and should probably be gotten rid of” and it doesn’t necessarily mean that person hates straight cis people, an NRx who believes that racial and gender differences in IQ, social roles etc have a genetic basis isn’t necessarily a hateful asshole who thinks anyone is “inferior” in any real sense.
(Note: Not an NRx, somewhat sympathetic to certain aspects of it, from the inside feels like I’m trying to do my bit for charitable debate but may actually be acting to defend a distantly related tribe from a perceived enemy???)
The giveaway is conflating an object-level question (are there genetic racial aptitude differences?) with a normative question (should women be allowed to do xyz?). Requiring belief in certain object-level claims about the world in order to be considered a moral person, regardless of the evidence for or against them, is the hallmark of a religion.
To be fair the commenter does not state that he shares this view, only that it is common in society as a whole, which is perfectly true.
Thinking that a group is inferior is a normative judgement. “X group is inferior” is not the same thing as “there are genetic differences in y.” Perhaps your characterization is the more accurate one in terms of capturing peoples’ beliefs. But I do not think the comment is doing what you’re accusing it of doing.
The commenter referred explicitly to genetics, so he is not talking about some vague moral sense of inferiority, but rather lower physical capacity. I did not repeat the word because of the distinction you are making, but I think the original commenter used it deliberately to conflate the two, while actually attacking (or pointing out that others attack) the object-level belief. Perhaps I am wrong, but if so he should clarify.
I think race also is a complete red herring in this respect. Rhetorically, it is a useful third rail to lay in front of one’s opponents, but leftists deny any significance of heritable genetic differences within races too.
True, and by and large NRx don’t make the claim that x is inferior to y *without regard to some metric* with evidence behind it. Evidence that could be used to support other theories, sure, but evidence nonetheless.
@Randy,
Forgive me, I could not parse that.
Saal:
I meant, you are more likely to read something like “Africans have intelligence on average lower than other races, jews and asians higher, as shown by IQ tests over the last 60 years and PISA scores” than you are to read “Africans are less important human beings.” The evidence cited could be used to support a “look at the effects of racism and colonialism” thing, so people apt to blame racism will look at those coming to any other conclusion as motivated by racism, but I don’t find that fair.
The problem is that non-religious modern people will tend to note (rightly) that in the current world intelligence matters a lot more than other attributes (a smart person is literally worth more in a modern economy, in the same way a Hercules was worth more in a pre-modern economy), and so, depending on the other axioms they hold, will either be unwilling to consider differing group means in intelligence, or bite the bullet and find some groups simply “more human.”
Note I said tend, and there are other justifications for universal human worth than Christian values or literal equality, but they’d tend to be more sophisticated and arbitrary.
That last paragraph was a bit of a tangent, though.
I think saying “we aren’t calling them inferior, just less intelligent” is disingenuous given that everyone is working in cultural context descended from Plato and Aristotle.
@ alexp
I think saying “we aren’t calling them inferior, just less intelligent” is disingenuous given that everyone is working in cultural context descended from Plato and Aristotle.
Can you expand, please? Esp what you mean by “everyone”?
alexp, I’m not sure, but I think the traditionalist counter-argument to that is “judging people’s humanity by their intelligence instead of their moral worth is dehumanizing, modern, and wrong (but I repeat myself)”.
Scott posted something similar about the constant feeling of guilt and worthlessness this causes in “The Parable Of The Talents”.
He also pointed out that intelligence is one of the only traits we insist is self-determined rather than “gifted” for the purposes of “moral calculus”.
@alexp:
I’m not confident just how your average NRx measures “human worth”, but I would imagine it ties in with the concept of Noblesse Oblige. I.e., everyone in a hierarchical social system has a role to fill, with obligations both to those above and those below them in social status, and your worth as a person (inferiority/superiority) is tied directly to how well you fill that role. So the peasant (or, in this day and age, service worker?) is measured not according to their raw intelligence, but as a function of their diligence, respect for authority, etc. The “ruling class” or aristocracy, on the other hand, ought to be judged on their intelligence, but also on their caring for those beneath them in a patriarchal fashion.
I think this is more in line with the inherited Aristotelian cultural context you’re speaking of anyway. After all, Plato talks about philosopher-kings, not philosopher-streetsweepers, but I don’t recall him ever making the argument that everyone else is unvirtuous.
“I think saying “we aren’t calling them inferior, just less intelligent” is disingenuous given that everyone is working in cultural context descended from Plato and Aristotle.”
Sadly, though, that isn’t enough to make it untrue if it is true, so we had better be prepared to either find another cultural context, or burn a lot of honest truth-seekers and lying heretics. If the object level evidence supports it, of course.
Luckily this is the most convenient world, right?
@Saal, I don’t take what Plato says about philosopher kings very seriously, but he definitely argued that knowledge and virtue were the same thing. Also, while he very occasionally made room for referring to a kind of cleverness that some might have which could lead them astray, in general he didn’t make much in the way of fine distinctions between categories like knowledge, wisdom, and intelligence.
I think you have a better chance with Aristotle, but while Aristotle is more willing to accept that there are different kinds of virtue, he still definitely ranks wisdom as the most important one.
@Protagoras:
I defer to those who have read more Plato/Aristotle. Would you not agree, however, that Platonic and/or Aristotelian “cultural context” has pretty much almost always only been a thing for elites? It seems fairly uncontroversial to say that most Western civilizations haven’t really expected the masses to cultivate philosophy as a virtue.
IDK, alexp’s allegation is vague to begin with, so I kind of feel like I’m fighting my way out of a paper bag here by trying to mount an effective counterargument.
So here’s how I’d say NRx tends to view racial differences, based on reading a lot of NRx:
There is a LOT of evidence, on any number of metrics from test scores to Raven IQ tests, to income to violence levels to time-preference to planning to whatever else that there are significant, population level cognitive differences between the Racial Groups.
From a societal perspective, this means that throwing ever more money at all-black schools is unlikely to work, as, indeed, it has not (cf. Baltimore and Camden per-student spending).
That, however, isn’t really all that important- the important thing, and the thing that gets lost in these discussions, is that population level statistics can tell us lots about relatively unimportant things (How will these TEN MILLION PEOPLE perform?) and nothing at all about actually useful things (How well will John perform at this job I’m hiring him for?)
I’d also point out that intelligence gets rather fetishized, unsurprisingly, by smart people, and that it isn’t necessarily linked to greater happiness or greater utility, but that’s a whole nother kettle.
And what’s more, we can look back and note that the black community was not always thus- it’s doubtful that black people were any smarter in 1950, but there’s no question they were doing better.
(The great irony of modern society is that reviewing the data on black people circa 1950 shows a population about to explode and grow as soon as these stupid restrictions were removed. Which they did- until welfare.)
So therefore, what can we conclude? Well- black people are less likely to be able to grok vector calculus. Neither can I. They’re more likely to show a high time preference- so do many people. This doesn’t mean we should bring back lynching and Jim Crow.
Instead, what most NRx seems to want is- let nature take its course. Black people deprived of the massive amounts of welfare permitting the current dissolution will be forced back into stable family units by natural forces, cutting down on crime in the bargain, and creating more wealth, more business ownership- all the various benefits of stable family units that already existed in black families back in the day.
Segregation will happen on its own. Most studies- and most places- seem to demonstrate that blacks and whites who meet at the store, or at the gym, or in school get along just fine- but they prefer to live with other black or white people. This isn’t really a big deal until someone breaks out a noose. Little daily acts of self-segregation eventually settle into two populations mostly doing their own thing.
But the problem is right now we’re chasing this bunny of complete equality and a perfect melting pot based on things like “calculus test scores” and “engineering jobs” that simply are not going to be equal.
And then you get into marriage, and schools, crime and The Family and the drug war and all these entangled issues- but that is, in general, the “average” NRx view “they aren’t as IQ sharp, and that has certain effects, but it doesn’t mean it’s time for a race war.”
In general, the tone is more that ignoring the shockingly high black crime rates is a way for the Left to destroy the family, social trust, and bring down the White Man. Make of that what you will.
I might be conflating HBD and NRx beliefs a little here, but it seems that almost everybody who subscribes a to the latter subscribes at least a little to the former.
First, NRx doesn’t just contend that some races are less intelligent for genetic reasons, but also have less respect for authority, less impulse control and are more prone to violence.
But regardless of that, I’m just saying that I find unconvincing the contortions that some in the NRx go through when they say that less intelligent doesn’t mean inferior. It’s like when they say that women aren’t inferior, they just belong in a role subservient to men.
Some may actually believe that, in a religious or quasi-religious sense, all humans have equal moral worth, or at least moral worth independent of their capabilities. I support that idea on some level, but until we get some sort of universal dole (and even then), it remains a useless platitude in the material world. People value other people, outside of a small circle of close friends and family, for what they can do for them. This might be offensive and wrong, Eggo, but it’s certainly not modern.
Do I have the same moral worth as Elon Musk or Terence Tao? That’s a question entirely outside the scope of a comment thread on a blog. But if there’s only room for two in the last lifeboat, you almost certainly wouldn’t vote for me to take it.
The point about Aristotle and Plato was more of an aside. It was meant to imply that the deifying of reason and wisdom isn’t some modern invention. Now even if you haven’t read a single word of any philosopher ever, your thinking is almost certainly within the context of the intellectual tradition that they passed to later Greeks who passed it to the Romans who passed it to early Christian philosophers like Augustine and then medieval philosophers like Aquinas. At this point, you not recognizing that you’ve been influenced by them is like a fish not noticing that it’s in water.
Sorry If I’m being incoherent right now, but it’s late.
Again, I think they’d argue that “what you can do” for friends and family is far more dependent on moral character than on “intelligence”, and a civilization that pressures people to pursue the latter and ignore the former will destroy itself and everyone in it.
One reason that the right is so suspicious of intellectuals is how many smart people seem to spend their Sundays arguing over vegan options at a fringe convention, while the plumbers take their families to church and a picnic.
And, to take another example from this thread, intelligent people are much more likely to “reason” that killing children is more ethical than eating a hamburger.
I’d definitely say the unhealthy worship of raw intellect is quite modern–almost as new as our obsession with beauty. Certainly many other attributes were once given equal footing with intelligence, and intelligence without phronēsis/prudence/wisdom just meant you were an unlikeable-smart-ass/potential heretic.
“…all humans have equal moral worth, or at least moral worth independent of their capabilities. I support that idea, on some level, but until we get some sort of universal dole, and even then, it remains a useless platitude in the material world. ”
Useless? Nonsense. It is the justification and motivation for behaviors ranging from charity to civility. I’d argue it is the residual effect of such philosophies that makes EAs care about saving starving Africans rather than, say, buying books for the kids of geniuses.
>I find unconvincing the contortions that some in the NRx go through when they say that less intelligent doesn’t mean inferior. It’s like when they say that women aren’t inferior, they just belong in a role subservient to men.
Well, they have to, because “inferior” is a super negatively loaded word. As you say, if one claims that HBDers/neoreactionaries/whoever are claiming blacks are inferior, then we have to accept that we live in a world full of superiors and inferiors (N-1), and then either:
1) Everything is super awful
2) We come to accept that “inferior” is not so bad, and therefore the NRx argument is not so bad
3) It is possible to be less intelligent and not be inferior, and then the NRx argument stops being some wordmixy rhetoric.
Besides, even if the HBDers are right, that doesn’t mean “Blacks are inferior”, but rather “Blacks are a group that contains a higher proportion of inferior people”; it’s far from a race essentialism argument (I think; I’m not actually very familiar with the theory).
“First, NRx doesn’t just contend that some races are less intelligent for genetic reasons, but also have less respect for authority, less impulse control and are more prone to violence.”
Some do, yes- I’d say that gets tangled up into a whole lot of aspects of “culture” as opposed to “race.”
“But regardless of that, I’m just saying that I find unconvincing the contortions that some in the NRx go through when they say that less intelligent doesn’t mean inferior.”
I’m smarter than a lot of people. I’m pretty sure I’m smarter than my mother. I’m also an often emotionally callous jerk who has mistreated a lot of people, trampled on the feelings of many others by accident, is often loud, boorish, and rude and so on.
But yeah, I read Cormac McCarthy and she reads Agatha Christie. Wooooo my superiority.
The simple fact is that where it really matters? Like- actually matters to life? Intelligence is worth very little. It makes it easier to get high paying jobs. Doesn’t make you nicer, kinder, a better parent, a better worker, a better friend.
Any IQ under about 150 is nothing more than a conversation piece that affects how quickly you read and how good you are at math. I suppose it might help somewhat at computer programming, but I have my doubts as to how much pure IQ benefits you there.
“Do I have the same moral worth as Elon Musk or Terrance Tao? That’s a question entirely outside the scope of an comment thread on a blog. But if there’s only room for two in the last lifeboat, you almost certainly wouldn’t vote for me to take it.”
Last lifeboat off the Titanic? Last lifeboat off a dying earth?
Either way, I’m guessing Alexp is probably a dude? So to all three of you- no. Stand and be still to the Birkenhead drill. Women and children first.
“The point about Aristotle and Plato was more of an aside. It was meant to imply that the deifying of reason and wisdom isn’t some modern invention.”
Christianity also had some major focus on things like works, humility, charity, kindness, The Great Chain Of Being and so on that took it in a very nongreek direction over the centuries.
Obviously, varying results- but it ended up with a society that had a protestant work ethic, so not all bad.
It is also late, and I’m just dashing this off before bed, so please pardon my incoherence as well.
I am receptive to the position that there are quite likely racial differences in average IQ, but that this has no implications for morality or policy.
I’m skeptical, though, that this is really the position at issue. I base my skepticism on the fact that racial IQ differences keep getting brought up, over and over, on comments here. It’s a recurring theme, one of the community’s pet topics.
Why would people want to keep reminding everyone of a topic with no implications? What possible interest could there be in revisiting a moot point again and again?
IMO the behavior doesn’t align with the stated view. And I think I believe the behavior more than the statement.
AntiDem’s “What If HBD Is True?” is a pretty good article on race written by an actual neoreactionary. It touches on many of the issues discussed in this thread.
Who says it has no implication on policy? It has huge implications on policy.
Animal-rights activists can claim that a chicken with a brain the size of a peanut has the moral worth of a human being, and nobody doubts their sincerity or accuses them of harboring an ulterior motive. We disagree at the object level, maybe countering that a chicken has one-tenth the moral worth of a person or that there’s some threshold of self-awareness that a chicken doesn’t meet, but we don’t dismiss the equal-moral-right argument as logically impossible or obviously hypocritical.
Then someone suggests that certain racial groups might on average be shy a few IQ points, and they must be planning to reinstitute slavery or some such because they can’t possibly believe in moral equality in the presence of material inequality?
Regardless whether people choose to map intellectual capacity onto normative moral value, intellectual capacity is still an objective quantity and not a normative one.
Worse, if people map (their normative assessment of) moral value backwards onto intellectual capacity, then their assessment of object-level intellectual capacity is likely to be wrong, because nature doesn’t obey the laws of human moralities.
And this is important because left-ideology is largely based on the belief that economic inequalities cause all other inequalities. If in fact economic inequality is just a trailing indicator like they believe crime, low educational attainment, etc. to be, then the left’s policy prescriptions all get exploded.
” everyone is working in cultural context descended from Plato and Aristotle.”
Nonsense, some of us are working in a cultural context descended from Christianity.
“For religion all men are equal, as all pennies are equal, because the only value in any of them is that they bear the image of the King.” G. K. Chesterton
And that is the only sense in which the equality of man makes any sense. That some people are less intelligent than others is readily observable — elementary school kids notice it — and whether or not the distribution is even across races, that alone is fatal to a concept of equality based on intelligence.
@ CJB
Do you have a link supporting your assertion about black Americans doing better in the 1950s? It’s something I would like to write about, but without a source it’s difficult to. Both you and Mary mention it, so I figure it must come from somewhere.
I spent some time researching this and the data seems to suggest the opposite. In this census report (table No. 1427), the real median income for black families more than doubled between 1950 and 1997. The ratio between black and white incomes also increased from .54 to .61. Not a huge change, but certainly not worse.
eh.net has data on wages and schooling (tables 4 and 5). It uses nominal dollars, but the ratio follows the same trend above and there is a decrease in the difference in schooling.
This paper by two Vanderbilt professors looks at home ownership. Table 1 is missing data for 1950, but for 1940 and 1960 it is 23% and 39%, respectively, while in 1990 it’s 45%.
Maybe the rate of change is different in the 50s (although, once again, where did you read this?), but the 50s were an anomalous decade for everyone (e.g. it had the lowest level of income inequality in the 20th century).
This got long, so I’m not sure who I’m responding to. But allow me to explicate thus: I’m not an NRx or whatever, I’m a minarchist and classical liberal. I believe absolutely in the equivalent moral worth of all humans (modified by their individual actions) and in striving for equality before the law (a futile struggle, given the effects of things like wealth and attractiveness, but useful nonetheless).
This is my philosophical stance. In my reading of the scientific literature and observation of real life, it quickly becomes glaringly obvious there are differences between groups (cultural, language, regional), and these groups often map onto racial lines. But these are questions of ability, completely agnostic with regard to moral consequences. And as others have stated, group differences tell us nothing about individuals.
The word “inferior” is loaded, and not useful I think, but if it is insisted upon, the question is always “inferior at what?”. We are talking heuristics here, generally. If you are picking for a basketball team, your choices are going to be different from picking for a chess tournament. There is no moral opprobrium that should be associated with being better at croquet than physics. But it does mean that if group X is better on average at croquet and group Y is better at physics, when you look at the very end of the distribution tail (professional croquet players and ivy-league physicists) you’re going to get very large group-based skews, and this is fine. It is only by investing one human attribute with moral value (intelligence, usually) that we get a problem, and this makes no more sense than to invest height with moral worth.
This brings a lot of people who loosely fall under “HBD” into conflict with the dominant political narrative, which is that all human beings are perfectly equivalent at all endeavors, and any statistical skew of any sort is the work of evil white male patriarchs. Of course, this also means that anyone with a modicum of scientific honesty who will talk about it gets lumped in with the people who do actually think black people are morally inferior or some such idiocy. I may not like being on the same side of an issue as some of the NRx people, but I can’t deny reality to be more socially acceptable. Just not in my nature. And I will not be made to renounce the evidence to avoid false associations made by others.
@ Tarrou
That’s a refreshingly civil comment, so I’ll try my best to engage constructively.
I don’t think people are primarily worried that “HBD” ideas might lead to black people being excluded from chess (although that would be unfair to potentially good black chess players, of course).
Intelligence is not just an arbitrary trait like height or eye color. Differences in intelligence, perceived and otherwise, have already been used to justify giving the “superior” people power over the “inferior” people (“for their own good”, of course), and such arrangements have repeatedly resulted in a lot of what we now recognise as terrible abuse.
I don’t think Nita is disagreeing with Tarrou, and I’m reading it as an example of a call to pretend there are no differences even if there were. Is this reading wrong?
(I don’t know enough to have an opinion on whether there are)
@Nita
You’re absolutely right. I believe that many people, including Scott Alexander and many commenters on this blog, believe that the correct solution to this common failure is to make “intelligence” into just an arbitrary trait like height or eye color so that downstream abuse/enslavement/murders/etc. don’t follow from acknowledging real differences. They seem to believe that this is a better route to follow than to deny/ignore reality because of the fear that reality may be once again used to justify abuse/enslavement/murder/etc. of populations.
I must say I agree with this in principle, though I don’t know enough about the science of intelligence as it relates to genetics and races to say whether or not it’s relevant in this particular case.
@ Cauê
(WordPress ate my previous reply, so this is take 2.)
If differences exist, it doesn’t mean that there’s no discrimination. E.g., if research showed that women are (on average) “more nurturing” or men are (on average) less willing to take care of children, could we conclude that current child custody decisions are perfectly just, so MRAs should shut up and go home?
Also, a little while ago the belief that black people are morally equal to white people (due to having souls, being made in the image of God etc.) didn’t protect them from being beaten or killed at their owners’ whim. So Tarrou’s idea that all we need to do is decouple intelligence from moral worth is not obviously true.
@ lvlln
If you mean “intelligence won’t matter when everyone has a standardized IQ of 300, and we should be urgently working on that anyway because intelligence is extremely important“, then yes, that’s what Scott believes.
And, well, intelligence does matter. I don’t see how everyone pretending otherwise would be more honest, more sustainable or more useful than the current situation.
What sets Scott apart from “HBD” folks is that Scott spends more time researching ways to increase human intelligence and less time raging about the mainstream view of group differences. This makes me a little skeptical about the motives of these brave defenders of Truth.
@ Cauê
(WordPress ate my previous reply, so this is take 2.)
If differences exist, it doesn’t mean that there’s no discrimination. E.g., if research showed that women are (on average) “more nurturing” or men are (on average) less willing to take care of children, could we conclude that current child custody decisions are perfectly just, so MRAs should shut up and go home?
Also, a little while ago the belief that that black people are morally equal to white people (due to having souls, being made in the image of God etc.) did not protect them from being beaten or killed at their owners’ whim. So Tarrou’s idea that all we need to do is decouple intelligence from moral worth is not obviously true.
@ lvlln
If you mean “intelligence won’t matter when everyone has a standardized IQ of 300, and we should be urgently working on that anyway because intelligence is extremely important”, then yes, that’s what Scott believes.
And, well, intelligence does matter. I don’t see how everyone pretending otherwise would be more honest, more sustainable or more useful than the current situation.
What sets Scott apart from “HBD” folks is that Scott spends more time researching ways to increase human intelligence and less time raging about the mainstream view of group differences. This makes me a little skeptical about the motives of these brave defenders of Truth.
@Saal:
Is race and IQ really a NRx thing, in particular? I’m not NRx, but it seems like a conflation to me.
I know there’s a lot of overlap, but when I hear “NRx” I mostly think “Bring back James II as CEO of America” whereas when I read about race and IQ stuff I mostly want to call that “HBD” or just “iSteve and friends.” And I don’t really think of, say, iSteve as a monarchist. (I mean, he just loves the Royal and Ancient game of golf, but that’s about as monarchist as he seems to get?) That being said, it seems to me that somebody could be all about the Stuart Restoration or appointing Elon Musk dictator for life, without having any particular interest in HBD. And as with iSteve, one could be all about the HBD without really giving a flip about monarchism or neo-cameralism or whatever.
This is an outsider perspective, but I see them both as part of the general Dark Enlightenment scene. Not every person in either part will necessarily accept all the views central to the other, and emphasis will vary quite a bit, but they’re substantially more likely to hold each other’s views.
Somewhere there’s a (rather outdated) connectivity map of Dark Enlightenment blogs floating around. I forget what it called the core reactionary “roll back the French Revolution” blogs, but they were one cluster on the map, and the HBD blogs were another, with sparser links between the two.
(Less Wrong was in the “techno-futurist” cluster. I was a little ticked about that at the time, but by the adjacency definition above it probably belongs there as much as iSteve does, even though the modal LW poster is pretty much on board with liberal democracy.)
Is that outdated? I’d love to see an updated one that included Twitter. There definitely appeared to be distinct clusters with interconnections at a “high” level on what people here are calling the “rational/irrational” scale.
You’re presumably thinking of this.
That’s the one.
The entire Dark Enlightenment scene seems to be a bunch of technically very intelligent people telling just-so stories about why they should be the cream of the crop online. They take the parts of recent scientific discoveries they like, badly misinterpret them, and turn them into elaborate cosmologies, much as the Social Darwinists turned evolutionary biology into a justification for colonialism, racism, and economic exploitation. Many of them would have been Social Darwinists and eugenicists had they lived in an earlier age.
“Bring back James II as CEO of America”
I’d prefer Charles II. I don’t know if he was a great king – his domestic critics seem to downgrade him because he didn’t engage in enough wars, which rather recommends him to me, but he seems to have been (a) a party guy which is understandable given his personal history (b) really remarkably tolerant of people making fun of him.
It’s been years since I read Cephas Goldsworthy’s biography of Lord Rochester (The Guardian didn’t like it, but I thought it was great fun, if definitely written with axe-grinding in mind), so I’m going on shaky memory here. But there was an anecdote where Charles was in a place of low repute due to reasons, and nobody there knew he was the king, and somebody called him “a black visaged bastard” or an insult of that kind. When it was revealed he was the king, instead of having everybody rounded up and flung in prison, he just laughed it off.
@Irenist,
This is also coming from an outsider perspective but have a look at this: http://www.xenosystems.net/trichotomy/
TL;DR: NRx is a trifold alliance between ethnonationalists, religious traditionalists, and techno-futurists (hypercapitalists). So the HBD comes in via the ethnonationalism and, to a lesser extent, the religious traditionalism. In addition, NRx is some sort of subset of the Dark Enlightenment, which essentially consists of a variety of “red pills”, including:
1. Democracy sucks
2. Gender roles are a hardwired thing to a greater extent than modernists would like to think, and reproductive strategies of the opposite genders are zero-sum
3. Tradition (including strong religions, patriarchy, aristocracy and/or monarchy) is social tech which shouldn’t be thrown out–strong version of Chesterton’s fence
4. Group/ethnic/racial/something differences in IQ, aggressiveness, etc. are genetic, and in some DE interpretations relatively strong genetic determinism is a thing
5. (and this is the big one) Whiggism is head-in-the-clouds nonsense, giving more freedom/equality to people can only result in decadence and societal decay, humans NEED hierarchy, and plenty of it.
At least this is what I’m getting from what I’ve read. So HBD isn’t quite “central”, but it is what they’re called out on most often. Which is weird, because to my mind HBD isn’t nearly the most radical split from the mainstream among those “red pills” O.o just goes to show what a sacred cow it is, I guess.
Edit: use of the term “sacred cow” was uncharitable and begging the question of whether Progressive racial and gender norms are informed by object-level facts or dogma. Leaving it there for a reminder.
Despite a non-negligible number of libertarians, Catholic conservatives, and neo-reactionaries, I expect that the median SSC commenter is left-wing. There’s a background of opposition to SJ here, but one can be left-wing and anti-SJ (see Gamergate).
I remain super baffled as to why EA pays so little attention to global warming. It’s a far more immediate and pressing x-risk than UFAI. I am not just saying this to diss EA; I would find lots and lots of discussion about what I can do as an individual, and how to balance that against other causes demanding of my time and money, very helpful.
The short of it is that lots of people are already looking at it, it’s unlikely to end the world even if it’s a problem, and there’s not much that an earning to give person can do about it even if they did.
But I know of no formal attempts to quantify this, and like half of a quarter of a person believes that it’ll be solved by geo-engineering / a richer populace anyway (I’m not saying that this partial person exists in the same brain by the way!)
But is it greater than 10^-18 likely to end the world?
Good question. I think yes, end of world from climate change is that likely at least. Climate models are bad enough that we can’t be so confident. But on the other hand so many interventions designed to reduce global warming cause catastrophe with at least that probability. Why waste time thinking about one-shot 10^-18?
If I were an animal rights EA, I believe the only appropriate response to this loaded question is “Moo”.
MicaiahC, comment of the day.
Of course, but so are at least 10^9 other things. If we’ve already got seven people working on global warming, it’s time to start diversifying.
Or, alternately, work on global warming because things that aren’t X-risks are still worth working on.
@John Schilling:
I think your answer misses the point.
The objection was that global warming X-risk isn’t worth studying because “it’s unlikely”. This in response to someone who doesn’t understand why EA isn’t more concerned with AGW.
“It’s unlikely to end the world” makes no sense as reasoning for why EA would not be very concerned with climate change.
You raise a valid point that there are people already working on AGW. But there are already people working on malaria, and AI risk and animal welfare and … and … and …
In the end, “it’s unlikely and there are already people working on it” are not arguments about why the next marginal dollar or hour of effort shouldn’t go toward stopping AGW.
I think you missed the point of my second quasi-paragraph: the argument for working to mitigate AGW, or anything else, should be made on a basis other than a minuscule probability that it is an X-risk. As is the case with malaria, AI risk, animal welfare, and the rest.
Work on AI risk on the basis that the risk is credibly alleged not to be minuscule. If you can make and defend that allegation, preferably without made-up numbers, fine. If your argument is “I can’t prove that it won’t extinctify humanity, therefore it is really important and we have to get to work on it”, you won’t convince me. If your neighbor’s argument is “It will diminish humanity by a billion lives lost and another five billion lived under extreme suffering”, that’s pretty convincing, even though their argument leaves plenty of survivors to rebuild.
Well, there’s a substantial chance, no crazy powers of ten needed, that we get 6 degrees of warming. https://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html
That would render vast tracts of the earth uninhabitable and it’s not clear where humans would get their food. We might be able to keep a small human population going near the poles, but they would be extremely vulnerable. I find this a worrying prospect. [understatement]
If I’m reading this correctly, the six degrees of warming scenario requires fast economic growth over the next century, with energy to be provided mainly from new sources of fossil fuels, and no substantial implementation of green technology or green politics even as warming starts to become much more unambiguous, and even then it requires things to turn out several standard deviations worse than the mean in the model. That’s not impossible, but “substantial chance” seems strong.
I think our disagreement is partly semantic and partly one of substance. A1FI is pretty much the ‘business as usual’ scenario. I agree it’s avoidable; that’s why I think it’s worthwhile for action to be taken. Why shouldn’t the EA movement be part of that? How does it make sense to think ‘well, someone will eventually do something about this’?
Bear in mind that the danger here is the climate is operating at a delay. There is a danger that waiting until ‘warming starts to become much more unambiguous’ means waiting until it’s too late to stop another, say, two degrees of warming on top of the ‘unambiguous’ changes already apparent.
Preventing global warming is greater than 10^-18 likely to have catastrophic effects, most obviously by ending the current interglacial. I don’t know if that counts as ending the world, but I expect that if you combine a very small chance of ending the interglacial with a very small chance that the resulting catastrophe would set off catastrophic human conflict you would still be over 10^-18.
My point isn’t that preventing global warming is *likely* to have catastrophic effects (although I think it has a significant chance of having net negative effects), but that 10^-18 is a very small number.
Sure, 10^-18 is a very small number.
But it is the number AI x-risk proponents have proposed as a reasonable amount to reduce AI risk by, because doing so saves so many expected lives.
The question is, why don’t such small numbers apply to AGW risk and risk reduction (in the minds of AI x-risk proponents)?
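The expected-value arithmetic being invoked in this exchange can be sketched in a few lines. Both numbers below are illustrative assumptions only (a Bostrom-style “astronomical waste” upper bound on future lives, and the 10^-18 risk reduction under discussion), not figures any commenter has defended:

```python
# A minimal sketch of the expected-value argument under discussion.
# Assumptions (not claims from the thread):
#   future_lives   ~ 1e54, an astronomical-waste style upper bound
#   risk_reduction = 1e-18, the tiny probability shift being debated
future_lives = 1e54
risk_reduction = 1e-18

# Even a 1e-18 reduction in extinction probability multiplies out
# to an enormous number of expected lives saved.
expected_lives_saved = future_lives * risk_reduction
print(f"expected lives saved: {expected_lives_saved:.1e}")
```

Whether multiplying a tiny probability by a huge payoff is a sane way to rank causes is, of course, exactly what the commenters here are disputing.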
You might be interested in the transcript of this Harvard EA event on climate change (http://harvardea.org/news/2014/12/04/keith-interview/) and this Giving What We Can analysis (https://www.givingwhatwecan.org/blog/2013-08-08/less-burn-for-your-buck-which-climate-charities-are-most-effective-in-reducing). There was also a recent Melbourne EA event on climate change, but I don’t know of a transcript or meeting minutes for that one.
Some takeaways from the climate change discussions I’ve been involved in: reducing factory farming is at least competitive with the best direct interventions for reducing greenhouse gases, but that doesn’t make the goals of climate-change concerned people directly aligned with those of animal EAs because beef is most relevant to climate change and chickens most relevant to animal suffering.
Geoengineering is an underfunded and potentially high-impact idea that deserves a lot more discussion and attention, but anyone who tries runs the risk of politicizing the issue in the way that Gore arguably politicized the larger question of climate change, so lots of people who think we need geoengineering advocacy are hesitant about how best to go about it. There are people at GiveWell knowledgeable about climate change and geoengineering options, and hopefully they will do a GiveWell-style analysis of cost-effectiveness and risk, which I expect to come out strongly positive.
This is really interesting, but can you expand upon the “as Gore arguably politicized the issue of climate change”? Was it less politicized before Gore? Was there ever a chance of having it not be political?
Well, Al Gore’s “An Inconvenient Truth” actually had a section titled “The Politicization of Global Warming”, where he called for global warming to… be politicized. Because it requires us to “change the way we live our lives”, which in turn requires the use of political power.
Anecdotally, as an Australian climate sceptic, my impression of the climate debate is that before Gore both sides paid lip service but didn’t try to do anything more meaningful than throwing wind farms the odd subsidy. Post-Gore, the left moved toward advocating more and more significant carbon controls, and the right initially tried to keep saying “us too, but maybe not so much” until about 2010, when that mostly collapsed into “bugger it, if we’re going to be branded as anti-environment anyway we might as well commit to it”.
Thanks, that is helpful. The interview with David Keith was a bit weird, I don’t know why he is so focused on Florida when everything I’ve read suggests that the biggest impact of sea level rises will be on the developing world, where people will find it harder to relocate. I’m also a bit sceptical of the source of GiveWell’s DALY/tonne figures; it comes from one company that provides coping-with-climate-change advice to businesses, and I’m not sure it’s the entity best placed to calculate the impact of climate change on future generations. I’m not saying ‘this is definitely incorrect’, just that I feel like if there were more open discussion of this issue I wouldn’t be scratching my head so much.
About geo-engineering: it is not that underfunded, and there are discussions and research taking place. You can find here the (very long: 120 and 200 pages, respectively) reports of the National Academies of Sciences. The first is on carbon capture, the second on albedo modification.
Shameless self-plug: I’ve written a “shortish” (a few thousand words) analysis of those reports, pulling out the quantitative estimates they give for carbon capture capacity, costs, temperature moderation, and so on (see here).
Oh my, part one of the GWWC write-up is really, really bad. Deforestation is a drop in the ocean of the problem of increasing CO2 in the atmosphere (but if the charities evaluated focus on this, it is valuable to discuss it, of course). I am uncomfortable with the formulation of the market feedback presented here, because it is extremely qualitative and inconclusive. The argument about big thresholds proves waaaaaay too much (apply it to hostile AI for consistency’s sake). Same with their point 4, in fact. Point 5 is even worse (yes, everything we do today could be undone in 50 years; apply that to malaria nets and smaller time scales).
(part II is absolutely great IMO)
David Keith is a proponent of albedo modification, a low-cost and high-risk temporary mitigation of the temperature (and ONLY the temperature) effects of change of atmospheric chemistry.
I find the beginning of his interview puzzling. He limits the sea-level rise to 2m at most, which amounts to asserting no or very limited ice-sheet collapse (very unlikely if we get a 4°C increase, unimaginable if we get a 6°C one). Moreover, even if no one is killed directly, it will be necessary to rebuild or move infrastructure inland and higher, which is going to cost money (and Acheman is right about who is going to be impacted more). He seems to state that the calculation of the costs and benefits of climate intervention is doable (but that even if we get it wrong, it would be worthwhile to do?). I agree with his conclusions that state- or supra-state interventions are going to be needed and that we should give money to lobby for changes to law and regulation (but I am a dirty statist anyway).
Not everyone is worried, because warming so far is small (well under 1% of absolute temperature – think 0°C = 273K) and slowly developing. Humans deal well with gradual change. We face more disruption in patterns of trade and specialization from regulatory vacillation and market panics than we’ll likely see from warming.
To a first approximation everything is linear (obviously it’s enough of an approximation that we don’t just stop looking for x-risk). At this point, pretty much all the money spent against risk of warming should go into unbiased climate modeling/prediction science. Unfortunately it’s hard to avoid funding a bunch of fanatic greens at the same time.
Demonstrably the majority of people doing apparently unbiased climate science (you can check their work and it looks legit) *also* often sign statements emphasizing a huge risk to humans from warming (which coincidentally elevates their status and funding), but that used to be true of believing the Pope talks to God. This near-consensus is not to be disregarded entirely, though.
If you think either that climate scientists are all involved in a conspiracy to get more funding and status by deliberately exaggerating the effects of climate change (why haven’t other specialities done this so effectively, I wonder?) or that science is no more likely to provide practically useful predictions of the future than when it was a branch of theology, then a) I think we are coming from quite different worldviews and b) I don’t know what your proposed methodology is for finding out about the world.
And isn’t it funny how there’s no money in working for oil companies?
I thought it was pretty clear that money wasn’t what motivated academics?
How much respect from your peer-group is there in writing the next Climate report, versus doing research funded by oil companies?
Are you familiar with the term “sagecraft”?
Global warming mitigation has more potential problems than the other things. If it’s not as bad as it’s made out to be, and we put heavy carbon taxes on it, then that could make a lot of people poorer. Throwing a few more dollars towards malaria nets doesn’t face the same risk.
Or if it’s not bad but good.
Warming will have both positive and negative effects, spread over a long and uncertain future. I have not seen any convincing argument for the claim that we can be confident the net will be negative, although it could be. It isn’t as if the current climate was designed for us and so could be expected to be optimal.
While I see what you’re saying and mostly agree with it, it could be argued that our current civilization is designed for the last hundred years or so of climate, and so in that way the current climate is optimal. At least, in the Procrustean sense.
Variants of that point are the only a priori reason to expect effects of change, in either direction, to be bad. But we are talking about very slow change. Over a century, farmers will change crop varieties multiple times with or without AGW. With AGW, the changes will include allowance for shifting climate.
And there are a couple of a priori reasons that point in the opposite direction–you can find extensive discussions on my blog.
Also, it isn’t as if we are adapted to a single climate. Humans currently live, prosper, grow crops across a range of climates much larger than the shift the IPCC projects for the rest of this century.
How is this survey structured, and what premises does it rely on, such that it produces a result we could regard as objective?
The thing about “$blog leans $side” complaints that I tend to find particularly irritating is not just that they’re typically subjective, but that they’re offered as if they are objective. I think this blog leans far, far right, and (sotto voce) you can trust me, because I’m world-renowned blog commenter netluvr8692, who’s commented on over 500 mainstream blogs, and you know they’re mainstream because I frequent them and I’m netluvr8692.
Which is to say, whenever I hear “this blog leans to the $side”, my tendency is no longer to believe that this blog leans to the $side, but rather that this commenter leans to the $otherSide. Or that this commenter has been reading so many $otherSide blogs that it becomes their normal and so any slightly $sided blog will stick out like a sore selection bias.
(The easiest thing I could do right now is claim this blog is balanced because it correlates well with the sources I check – The American Interest, Althouse, WaPo, NYT, CSPAN Radio, Reason, and therefore hooray balanced – and then anyone reading that could easily say I’m skewed because everything here is to the right of DailyKos or to the left of PJMedia “which are so mainstream and moderate that it goes without saying”.)
Which brings me back to the above question: how does one measure a comment’s politics, beyond plotting some sort of understood set of issue positions in terms of American party politics (or British or German or Hungarian or what have you), and then computing some sort of distance formula between those and a comment?
And more to the point for SSC: how do you measure the *reason* content of such a comment? Another tendency I have when I read “$blog leans $thataway” is to read it as an implied claim that $blog is reasonable iff it jibes with the way most people think.
So I’ll throw this whopper out there for thought: I think left-right and rational-irrational are utterly independent axes. If they define four quadrants, then there are uncountably many arguments in each quadrant (fact, but probably not surprising or interesting), and a finite but “large” number of arguments in each that are actually made on the internet (my opinion, and probably more interesting). If there’s an imbalance there at all, I would venture that the arguments skew irrational, and that’s all.
This seems like a “perfect is the enemy of the good” reaction to me.
Yes, politics is super fuzzy, but it seems to me that most folks have a pretty good intuitive understanding of where the Overton window sits and roughly where it’s divided, and demanding a super data-driven analysis before we can say anything about the range of politics expressed on a given site is impractical.
I think the real issue here is conflating “irrational” with “other side”, and this is IMO low-hanging fruit to be plucked with a little bit of principle of charity. I would further bet that SSC has a much better chance of getting a decent measure of the political distribution as a community given this community’s proven track record of trying to adhere to charity norms.
“Yes, politics is super fuzzy, but it seems to me that most folks have a pretty good intuitive understanding of where the Overton window sits…”
That’s just it. I keep seeing evidence that people don’t. They both conflate “agrees with me” with “normal” due to repeated exposure, and conflate “agrees with me” with “is reasonable” due to reading enough of it to see whatever reason is contained within. (There’s also a secondary problem where people conflate American scope and Western thought scope, and don’t specify which scope they’re arguing in.)
That said, I do recognize SSC’s charity norms, steelmanning, etc., and agree with your sense of the advantage it confers. Indeed; it’s why I think my question might get some interesting responses here, relative to other places I could pose it.
Not really? It reads like he’s saying “the metrics we’re using are totally useless–how can we get something that’s at least basically correct”.
That seems more like “bad data is worse than no data” rather than “perfect is the enemy of good”.
Older posts related to Figure/Ground Illusions: Against Bravery Debates, All Debates Are Bravery Debates
Can I get a link to this survey of the SSC commentariat? And when was it originally posted?
Well… your case here might be helped if you’d link me to any post where you’d ever described a neoreactionary as “a Vogon in a skin suit” (in a post where you complain about feminists using thinly-veiled body-shaming against nerdy men, no less). Or, more generally, if/where you’d accused neoreactionaries of secretly believing completely different things from what they say. Or accused neoreactionaries of using intellectually-dishonest arguments to retroactively justify a sadistic desire to inflict suffering on people different from them. &c.
When there’s a neoreactionary movement that starts Twitter/Tumblr-shaming people (particularly from positions of social power), then maybe Scott will.
1. You mean, like reproductively viable ants?
2. I carefully avoided Object-Level claims, since the main thrust of “In Favor of Niceness, Community, &c.” was that mud-slinging isn’t justified based on the qualitative threat level (Scott, and the commentariat here, excoriated “Andrew Cord” for claiming that it was), therefore it’s wrong to dehumanize Group X even if they’re More Evil than Group Y.
I had to look that reference up. What percentage of GG’ers do you think have read Moldbug? Or even heard of the neoreaction?
I’m quite confident on <1% for the first question, but "heard of" is pretty hard to answer.
Also, because the misrepresentation is annoying, I'll take the opportunity to link to lefty David Auerbach from Slate trying to understand why gamergate didn’t turn to the right with time, as he expected them to: http://www.twitlonger.com/show/n_1smgr7s
(e.g.: “1. Gamergate consists primarily of secular, vaguely liberalish types, alongside a mix of moderates, libertarians, and heterodox conservatives. None of them are particularly keen on right-wing dogma.“)
Ladies and gentlemen, I give you outgroup homogeneity.
It was a bit of a cheap shot, I admit. My second point still stands, though.
Any description of reproductively viable worker ants as being a neoreactionary movement uses a very unusual meaning of the word “neoreactionary,” at least as it would be used on this blog.
It’s completely wrong, but sadly not all that unusual — it’s the equivalent of referring to Obama as a Communist.
You’re right that it’s unusual by the unusually high standards of SSC. 😉
Yeah, I confess I sometimes mentally switch around the SSC meanings of said terms with how the rest of the internet uses said terms.
“Reactionary” has been repurposed for use in situations where “brogressive” and “privileged liberal” are considered too informal or not sufficiently hostile, respectively. And adding “neo-” or “crypto-” just makes everything sound sinister.
Neo-philanthropist. Neo-ethical. Crypto-kitten. Damn, you’re right.
The term doesn’t apply to gamergate in any of the ways the “rest of the internet” uses it. Unless one were to admit that SJ is using it as a blanket term for “anyone who disagrees with it”.
I have a new nickname.
It fits the opening paragraph of the Wikipedia page of “Reactionary” pretty well, as well as that dictionary-box thing that pops up whenever one Googles the word “Reactionary.”
Wiki opening paragraph:
No, it seriously doesn’t.
So would wanting women to have voting rights if we somehow ended up in dystopic future where they didn’t. Hell, the Ukrainians were reactionaries for wanting to return to the previous political status quo of not being invaded by Russia!
Your argument proves too much. Furthermore, reactionary and Neoreactionary are not synonyms.
@Cauê
Why not? They think the current status quo is “SJWs have control of games journalism and they didn’t used to.”
@Sylocat
That’s some weird framing, and a weird and narrow enough use of “reactionary” that you are probably the first one to use it to mean this when referring to gamergate.
Really, now, did you not mean to connotationally smuggle something else there? Like the political positions people actually think about when they think of those who “favor a return to the status quo ante, the previous political state of society, which possessed characteristics (discipline, respect for authority, etc.) that he or she thinks are negatively absent from the contemporary status quo of a society“?
Wait, I’m the first one to use it that way? I thought all us feminists spouted off all those buzzwords whenever we saw something we didn’t like, because our primitive lizard brains can’t handle… something.
In all seriousness, I do think GG fits the “(of a person or a set of views) opposing political or social liberalization or reform” that Google’s dictionary provides, as well as that whole “yearning for a past (that never actually existed)” thing, judging by the number of rants I’ve heard about people saying how much better it was in the Good Old Days before the mean old SJWs invaded and tried to force gaming to change to suit their whims.
I’ve already admitted it was a cheap shot. I got a bit snarky because I was carefully avoiding making value judgments on the actual merits of the human beings involved, and Urstoff immediately started the “But your side is objectively worse” thing (IE, exactly what I got accused of doing when I replied).
But whatever. This probably ain’t the thread to be changing people’s minds about that. Sorry I brought it up.
In my reading, Urstoff gave an example of something SJ does and neoreaction doesn’t. That doesn’t mean one is “objectively better”. But it does mean it’s weird to blame Scott, who reacts to X when the Greens do it, for not reacting to Y when the Blues do it, and then to conclude that he’s reacting to the Greens rather than to “people doing X”.
Yes, I do think you were the first to call GG “reactionary” while maintaining that this isn’t intended to mean at least one of “right-wing, misogynistic, racist, homophobic, transphobic, etc.”, but only to mean this one very specific thing. Which doesn’t pass the smell test.
Urstoff wrote:
I’m also not seeing how this statement could reasonably be interpreted as saying or even implying that SJW is “objectively worse” than NRX. Like Caue, I interpret that as making a claim of fact, that SJW tends to Twitter/Tumblr shame people from positions of power, while NRX doesn’t. From my experience, this claim seems accurate.
I believe that the point of Urstoff’s post is that what Scott Alexander does is call out Twitter/Tumblr shaming from positions of power as things he considers bad, and because of how SJW and NRX behave, this results in him calling out the former but not the latter.
Except, as I have explained fifty gazillion times already, I wasn’t criticizing Scott for “calling out” feminists when they do meany-pants things. I was criticizing him because his usage of vicious personal insults and accusations of intellectual dishonesty only go in one direction, which is particularly ironic since, in “In Favor of Niceness, Community and Civilization,” he outright said that he thinks it is wrong to use mud-slinging and falsehoods even when you think The Other Side “deserves” such attacks.
I don’t perceive Scott Alexander as having hurled vicious personal insults at any individual in SJW, so I can’t really speak to that. If you perceive that he did, and to an extent beyond what he’d hurl at the NRX side, then I guess we have a difference in perception. Notably, I don’t perceive “Vogon in a skin suit” or variations thereof to be vicious or an insult within the context in which he used it.
Whether an accusation of dishonesty is an attack depends on whether one believes dishonesty is a bad thing. I believe Scott Alexander, like myself, sees SJW regularly use their sizable platform to proudly proclaim their dishonesty as a virtue. I haven’t seen NRX use their tiny platform to proclaim anything of the sort, so it makes sense to me that he would accuse one side of dishonesty more than the other.
As a member of the Blue tribe, I’m much more exposed to SJW than NRX, so I acknowledge that my perception may be inaccurate, and it is very possible that Scott Alexander is also falling victim to a similar failure in perception. I also believe that anyone criticizing him may be falling victim to the same, just from the other end.
@lvlln:
You understand that
I fail to understand how describing someone as a “Vogon in a skin suit” could be described as anything other than an intentional insult.
Note the “not actually evil” in the description. In the comparison, it is the Vogons that were insulted by the person who sees no problem in labeling people “rape-loving scum” for doubting Ms Mangum.
Among many, many other cases of hateful, sadistic hounding of anyone crossing her (or just offering a vulnerable side).
Sylocat is using the term “neoreactionary” to describe GGers because his SJW friends do. There isn’t really much more to it than that and his justifications are very blatantly post-hoc.
It should go without saying, but apparently we have to do so for the sake of clarity: “reactionary” politics requires a contiguous frame of reference to be meaningful. For GG to be “reactionary”, there would have to be a Glorious Past where Kotaku and Reddit and so on a) existed, and b) didn’t censor things they didn’t like. I don’t see many people in GG eager to return to a games press that consists of four people in a bunker at Future Publishing pretending to understand how Amigas work and cut-and-pasting screenshots onto reviews. The frame of reference has clearly changed and GG are against specific properties of that new frame, just as the SJWs are.
All this is, of course, a poorly veiled power grab.
Thank you once again, Zorgon, for being here to tell me that I secretly possess entirely different opinions and goals from the ones I have repeatedly stated. I get so gosh-darned confused about what’s going on inside my primitive little skull.
Heck, maybe I SHOULD start using “Neoreactionary” to mean “Anyone who isn’t a feminist,” since if you replaced every instance of the word “Neoreactionary” in my original comment with the phrase “Anyone who isn’t a feminist,” my original comment would still be true, and would make my actual point much better than it apparently wound up doing.
Did… did you… just accuse me of mind-reading?
When talking about GG? The group that, despite its now somewhat storied history of donating to the cause of women in games and hunting down people who send death threats, is still painted by your side as a “misogynist terror group”? For effectively no reason other than that they oppose your side?
You don’t have anything resembling legitimacy in that regard, Sylo. No anti-GGer does.
Zorgon, come on, turning this into Twitter would make nobody happy.
Thank you. Seeing you use a reference like that was very informative.
You’re welcome.
No, GamerGate is not a Neoreactionary movement.
One is reminded of the joke about the Jew caught reading Nazi propaganda – “but it’s so nice reading about how we run the media and the banking corporations!”.
That said, the idea that GG is a Neoreactionary movement is not new.
I also think, in the discussion of which way the commentary here swings, that it’s both balanced and extremely right.
Oh man. I love that picture so damn much.
(it’s almost certainly trolling)
Latest one of those for GG is that apparently, according to a Seattle paper, they are responsible for 30-40 hacking attempts on Zoe Quinn’s website a day!
Never mind, of course, that GG long ago stopped caring about ZQ. The important thing is that all the bots which trawl the Internet looking for poorly secured servers are now under GG’s control! The elite hacker known as Anonymous must be quaking in his boots at such a glorious e-peen.
>Can I get a link to this survey of the SSC commentariat?
By now it is a myth
>And when was it originally posted?
I’d like to know that too, the problem with this kind of thing is that, say, 8 months, is an eternity in the context of high activity internet communities.
>Well… your case here might be helped if you’d link me to any post where you’d ever described a neoreactionary as “a Vogon in a skin suit” (in a post where you complain about feminists using thinly-veiled body-shaming against nerdy men, no less). Or, more generally, if/where you’d accused neoreactionaries of secretly believing completely different things from what they say. Or accused neoreactionaries of using intellectually-dishonest arguments to retroactively justify a sadistic desire to inflict suffering on people different from them. &c.
The problem is that Neoreactionaries don’t do anything, so how can he condemn their behaviour and their mean activism and their shitty columns in mainstream publications when all they do is write blogs and then subsequently comment on each other’s posts that only they and Scott read?
Amanda Marcotte and Laurie Penny didn’t exactly spam Scott Aaronson with links to their articles about him, yet Scott (Alexander) felt the need to step in and defend him from their brutal assault.
I think the difference in size of audience is the salient one here, not whether commentary was specifically sent to someone. I don’t claim to know for sure, but it wouldn’t surprise me if Marcotte and Penny each reach many more and wider ranging people with their commentary than all neoreactionary bloggers combined (caveat: I haven’t really looked much at the neoreactionary movement – it’s possible that there are mainstream nrx voices that eclipse that of Marcotte or Penny that I’m not aware of).
If I perceived nrx voices as being anywhere near as loud or powerful or commonly heard as SJW ones, I certainly would agree with the criticism that Scott Alexander seems to spend too much effort calling out the latter compared to the former. I do not perceive this.
I don’t see too many of the SJ crowd linking to much of what Marcotte writes approvingly. But maybe I run in better circles than the ones you’ve encountered.
But even if you do come back with a gazillion link roundups to prove me wrong on that, what’s the cutoff point? What is the ratio at which one side is secure enough that they can afford to have their humanity respected less?
@Sylocat, the Crypto-Kitten
I don’t know. I also don’t know what relevance that question has to the topic of discussion, since I don’t perceive Scott Alexander as ever having respected anyone’s humanity less (in fact, he’s regularly accused of respecting it too much). Do you have any examples? I certainly don’t think calling out someone’s bad/dishonest arguments or using a metaphor to highlight someone’s lack of empathy somehow qualifies as respecting someone’s humanity less.
>Amanda Marcotte and Laurie Penny didn’t exactly spam Scott Aaronson with links to their articles about him, yet Scott (Alexander) felt the need to step in and defend him from their brutal assault.
I don’t get what you’re trying to say here, and I think I’m expressing myself poorly. Allow me to reset (please):
We have two groups here: SJ People and Neoreactionaries. (Using the term “SJ People” isn’t great, because no amount of intersectionality can make up for the fact that we’re putting a lot of different interest groups that want different things in the same bag, but whatever).
On the content of their ideologies, Scott agrees a lot with the first one, and disagrees a lot with the second one. So, when it comes to content, Scott has argued for the former (https://slatestarcodex.com/2013/04/20/social-justice-for-the-highly-demanding-of-rigor/) and against the latter (https://slatestarcodex.com/2013/10/20/the-anti-reactionary-faq/) (I’m sorry for the horrible format, but not sorry enough to overcome my laziness).
However, when it comes to behaviour, the Neoreactionaries are mostly harmless (there was this guy who was an ass on twitter, apparently, but the cabal exiled him or something like that) and patently irrelevant, bitching them out on whatever bad behaviour they might display is kind of equivalent to scolding a child you’re not related to (or a teacher of) for being bratty: You can do it, and you might even be right, but unless you’re an old lady, you’re just going to look (and probably feel) petty. Social Justice is a large movement with significant social traction and non-negligible political power, it can be and has been harmful, and as such it is a valid target of criticism in this front.
As for your specific complaints:
>any post where you’d ever described a neoreactionary as “a Vogon in a skin suit” (in a post where you complain about feminists using thinly-veiled body-shaming against nerdy men, no less)
I agree that Scott went overboard with the insults there; they didn’t add to the content of the post and detracted from its presentation. I’m not sure what you mean in the parenthesis. I mean, yes, Vogons are ugly, but from context it’s pretty clear that’s not the point of the analogy.
> Or, more generally, if/where you’d accused neoreactionaries of secretly believing completely different things from what they say
Well, it seems like it’d be pointless for them to do so, right? Besides hipsterdom, which they definitely get accused of. Hell, another discussion in this post is all about whether they really mean it when they say that the population of one race being less intelligent doesn’t make them morally inferior in value.
>Or accused neoreactionaries of using intellectually-dishonest arguments to retroactively justify a sadistic desire to inflict suffering on people different from them.
Well, not only do they not do this, I don’t think they could even if they wanted to, because they don’t have the numbers to pull it off. On the other hand, there’s at least one example of this behaviour within the SJ community (Requires Hate; I personally don’t know any others).
Scott hasn’t got around to posting the survey yet. He says in the OP that he will get around to it eventually.
Yes, thank you, I did actually see that part of the post (hence why I mentioned it in the first place). I was asking what the questions were.
> Well… your case here might be helped if you’d link me to any post where you’d ever described a neoreactionary as “a Vogon in a skin suit” (in a post where you complain about feminists using thinly-veiled body-shaming against nerdy men, no less).
But Amanda Marcotte is a Vogon in a skin suit, so I’m not sure what you’re disagreeing with here.
As tempted as I am to agree, it doesn’t do the standards of discourse here any favors to say so.
So…what are your politics?
I’m testing a hypothesis.
I politically identify as a toaster.
A brave little one, no doubt
Oh, c’mon. I’m trying to come to a more accurate understanding of the universe here.
The Toaster Party has nuanced opinions on agricultural subsidies!
Okay, yeah. I’m being a little tongue-in-cheek here, and anything like this has to be done carefully if you want to avoid collapsing into the unproductive geek “politics is dumb monkey status games for normals” thing. But I really do think that consciously maintaining political affiliations — as opposed to object-level political opinions — is bad news for most of us. So call me a political skeptic.
We could argue over how much evil has been done by people pushing their particular faction’s utopia vs. pursuing their own personal self-interest, but I don’t think it’s very arguable that the former is substantial and has a mixed record at best of curbing the latter.
SCP-426, is that you?
I’m not a big SCP fan.
Since you asked nicely…
Mainly I hate people giving orders about what to think, or pushing their politics via mobbing and delegitimization. I have all sorts of opinions on other political issues but this is what gets me exercised and, if we’re being totally honest here, perhaps less polite and charitable than I should be. It’s something I should work on, especially in communities like SSC where people are more reasonable than average.
(But that said: Marcotte and her fellow-travellers in the media can take a long walk off a short pier, seriously.)
Never commented before, but something struck a personal chord on this one:
I will read your comments on SJW at the times in which my bubble seems to be filled with nothing but increasingly irrational diatribes against them and then irrational ripostes. Maybe it is exactly the figure/ground problem, but reading your writings that tease out the positive qualities (while neither giving a free pass to the negative nor using the skew of the negative to summarily d