Links 8/15: Linkering Doubts

Okay guys, there’s an island in San Francisco Bay on sale for five million dollars. That’s, like, less than a lot of houses in San Francisco. Surely some of you can get together and figure out something awesome to do with this?

Scott Aaronson speaks at SPARC on Common Knowledge and Aumann’s Agreement Theorem. I get a shout-out at the bottom.

Reddit’s r/china has some interesting looks into the Chinese microblogosphere. You can find the comments of ordinary Chinese web users about the stock market crash (1, 2, 3, 4, 5, 6) but for a real look into the Chinese id, see what commenters think when somebody’s daughter wants to marry a Japanese person.

Speaking of r/china, I was originally confused by their references to “Uncle Eleven”. Turns out to be a nickname and censorship-route-around for Chinese leader Xi Jinping. Can you figure out why? (answer).

The Trobriand Islanders have a system of status based on yams, and Wikipedia describes it as charmingly as possible.

More compound interest being the least powerful force in the universe – an Indian housing lottery offering slum-dwellers the chance to move to better neighborhoods has no effect fourteen years later.

Jeff Kaufman weighs in on effective altruism, AI, and replacement effects.

Related: 100,000 Hours – “It’s a common misconception that we recommend all effective altruists “marry to give,” or marry a high-net-worth individual with the intent of redirecting much of their wealth to effective causes. In retrospect, we emphasized this idea too much in our early days, and as the most controversial of our suggestions it attracted a lot of press. In fact, we recommend that only a small fraction of EAs pursue MtG. MtG is probably best suited to attractive people, those with good social skills, those who fit in well in high-status and wealthy circles, and women looking to marry men.” Clue for the clueless: THIS IS A JOKE.

We know that religious people are happier and more mentally resilient than non-religious people, but the standard explanation is that going to church provides a sense of community and social connectedness. But a new study finds that religious activities are better for your mental health than other forms of social participation.

Matching Platforms and HIV Incidence – online and mobile dating sites increase HIV prevalence when they enter an area. The quasi-experiment suggests they’re responsible for about a thousand extra HIV cases in Florida.

Uber for health care in the form of doctors making on-demand house calls. It’s easy to dismiss this as a toy for the ultra-rich, except that the price – $100 to $200 per visit – actually isn’t too bad compared to what you might otherwise have to go through to get a doctor if you’re not on insurance.

Argentina sort of has open borders already. Why aren’t people raising money to send Africans to Argentina? Or are we worried that if too many people take advantage of the opportunity Argentina will change its mind?

Further adventures in Euclidean geometry: the nine-point circle. Also, mathpages.com isn’t afraid to ask the hard questions, like are all triangles isosceles?

UK admits e-cigarettes are safer than smoking and a useful way to fight tobacco addiction.

Scientists: Modafinil seems to be safe and effective “smart drug”. “We’re not saying go out and take this drug and your life will be better,” [we’re just presenting lots of evidence that this is the case].

Patient blows up hospital ward after lighting cigarette in hyperbaric oxygen chamber. The scary thing is that I can totally imagine the sort of person who would do this.

Finally, a candidate with an idea for out-of-control higher education costs that isn’t just another form of tulip subsidy: Marco Rubio proposes a private equity model a la Milton Friedman.

70% of Pakistani medical students are female, but only 23% of doctors are. A medical education is a status symbol in Pakistan, and women seem to be pursuing it to increase their value in the marriage market, then getting married and dropping out of medicine. As a result, Pakistan spends a lot of money on medical education and is drastically short of doctors. What do they do? Does your opinion change if I tell you that people involved in US medical education have told me we have a similar problem here? (albeit much less severe, and more related to child-rearing than marriage)

The FDA has been approving lots of stuff lately.

Finally, a smoking gun that one of the country’s leading climate change experts was engaged in perpetrating a fraud of massive proportions! Unfortunately for oil companies, that fraud was pretending to be a CIA spy in Pakistan to get out of work.

A more serious problem: most Kyoto-Protocol-approved carbon offsets from Russia and Ukraine may be made up for profit.

Ex-President Jimmy Carter is metal: “I may be dying, but I am going to take an entire species with me.”

Dolphins discover Goodhart’s Law.

Burma’s Superstitious Leaders: “The decision in 1970 for Burma to change from driving on the left-hand side of the road to the right-hand side was reportedly because the General’s astrologer felt that Burma had moved too far left, in political terms.” You say ‘astrologer’, I say ‘social priming theorist ahead of his time’.

Related: Get your anti-priming tin foil hats!

A while ago I argued with Topher about the degree to which people used to say refined carbohydrates were good for you. Topher said no one important had ever said anything like this, and I said some people had sort of said things that implied this even if no one had said it in so many words. Maybe we were both wrong: there was (and still is) a substantial body of literature directly suggesting that “a high-carbohydrate, high-sugars diet is associated with lower body weight and that this association is by no means trivial”. Sigh.

All I want for Christmas is augmented reality sand that turns into a relief map.

How the Japanese do urban zoning.

Plan to solve problem by releasing 25,000 flesh-eating turtles failed due to “lack of planning”, say government officials.

A Neural Algorithm of Artistic Style [warning: opens as PDF]. Unless you’re a machine learning specialist, you want to skip to page 5, where they show the results of converting a photo to the style of various famous paintings.

You’ve probably heard by now that the psychology replication project found only about half of major recent psych studies replicated. If you want you can also see the project’s site and check out some data for yourself.

Related (though note this is an old study): journals will reject ninety percent of the papers they have already published if they don’t realize they’ve already accepted them.

Related: this article on the replication crisis has a neat little widget that lets you p-hack a study yourself and will hopefully make you less credulous of “economy always does better when party Y is in power!” claims.

A study comparing the fast food/obesity association in twins (really interesting design!) finds that genetics seems to determine the degree to which fast food makes you obese. That is, people with certain genes won’t gain weight from fast food, but people with other genes will. Still trying to decide what to think about this.

The VCs of BC – trove of cuneiform tablets on the ancient Assyrian economy reveals that they had institutions similar to our stocks, bonds, and venture capital. Also really interesting exploration of the gravity model of trade and what it means for economics today.

Sam Altman, “head of Silicon Valley’s most important startup farm”, says that “if I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Meanwhile, on my blog, people who don’t have a day job betting fortunes on tech successes and failures continue to say they’re 99.9999999% sure that even $1 million is too much.

Governors’ Mansions of the United States. I wouldn’t mind being governor of Idaho. On the other hand, I think becoming governor of Delaware would be a step down for a lot of people.

Evidence of pro-female hiring bias in online labor markets.

Speaking of confidence and probability estimates, Scott Adams goes way way way out on a limb and predicts 98% chance of Trump winning the Presidency. While I super-admire his willingness to make a specific numerical prediction that we can judge him on later, I wonder whether he’d be willing to bet me $100 at 10:1 odds (ie I pay him $100 if Trump wins, he pays me $1,000 if Trump loses), given that if his true odds are 50:1 that should be basically free money. Or, of course, he could just play the prediction markets and have even better chances. If not, then despite his virtue in giving a number at all, I can’t believe it’s his real one.
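
For concreteness, a quick back-of-the-envelope sketch (Python, using only the numbers above, so nothing here is new information) of why the offer should be close to free money for anyone who genuinely believes the 98% figure:

    # Expected value of the proposed bet, using only the numbers above:
    # Adams says P(Trump wins) = 0.98; the offer is that he collects $100
    # if Trump wins and pays out $1,000 if Trump loses.
    p_win = 0.98        # Adams's stated probability that Trump wins
    gain = 100          # what Adams collects if Trump wins
    loss = 1000         # what Adams pays if Trump loses

    ev_for_adams = p_win * gain - (1 - p_win) * loss
    print(f"Expected value for Adams at 98%: ${ev_for_adams:.2f}")   # $78.00

    # Break-even credence: the bet only loses money in expectation if his
    # true probability is below 1000 / (1000 + 100), i.e. about 90.9%.
    break_even = loss / (loss + gain)
    print(f"Break-even probability: {break_even:.1%}")               # 90.9%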

Schwab study looks at how five different strategies for market timing would have worked over the past twenty years.

Retrospective study: the “STEM pipeline” stopped “leaking women” in the 1990s; since that time nothing that happens after the bachelor’s level explains underrepresentation of women in any STEM field.

High levels of national development make countries more likely to be democracies, but democracy does not seem to cause higher levels of national development. Related: relationship between peace and democracy may be spurious.

Jerry Coyne weighs in on the recent “Holocaust trauma is epigenetically inherited” study. Please consider epigenetic inheritance studies guilty until proven innocent at this point.

The stories behind Russian subdivision flags.

Doctors Without Borders makes a plea against cracking down on India’s cheap generic pharmaceutical industry, the “pharmacy of the developing world”.

Remember that story a couple of months ago on a sting that proved most big supplement companies’ products don’t contain any of the active ingredient at all? Now there’s some argument going on that the sting was dishonest and bungled its tests, and the supplement companies were perfectly fine all along. Related: apparently you can’t sue a prosecutor for anything they do, even if it’s really stupid and destroys your business.

Nonconservative whites show a preference for black politicians over otherwise-identical white politicians in matched-“resume” studies, leading to greater willingness to vote for them, donate to them, and volunteer for them. I don’t think the paper looked at conservative whites, and I’m curious what they would have found if they did.

Further suggestion that genes have more effect on IQ in the rich than the poor. A koan: this study found shared environment only affects IQ for people below the tenth percentile in economic status. The tenth percentile of income is below $12,000. But fifty years ago, probably most people were below $12,000, and fifty years from now, maybe nobody will be below $12,000. Do you think this same study done in the past or future would repeat the finding of a less-than-$12,000 threshold, repeat the finding of a less-than-10% threshold, or something else? Why?

Higher school starting age lowers the crime rate among young people. Four day school week improves academic performance. It would probably be irresponsible to sum this up as “basically the less school you have, the better everything goes,” but I bet it’s true.

Currently #1 in Amazon’s Political Philosophy section: SJWs Always Lie by Vox Day. Currently #2 in Amazon’s Political Philosophy section: John Scalzi Is Not A Very Popular Author And I Myself Am Quite Popular, by somebody definitely going the extra mile to parody Vox Day.

Related: did you know that Vox Day once formally debated Luke Muehlhauser on the question of God’s existence? It went about as well as you would expect.


606 Responses to Links 8/15: Linkering Doubts

  1. Pffft, we had one of those sandbox relief maps when I was a UROP at the Tangible Media Group back in ’03. Ours computed shadows and drainage too!

    http://tangible.media.mit.edu/project/sandscape/

    • tgb says:

      The linked one does drainage and water physics too. The video the gif is from shows them making waves by dropping sand into a lake. Looks like a lot of fun!

    • 27chaos says:

      Coming soon: The Sim City Board Game

  2. Stephen Frug says:

    While we’re on links: Scott, when I saw this article by David Roberts (who writes great if terrifying stuff about Climate Change (formerly for Grist, now for Vox)), it made me think of you. Quite possibly unfairly. So I’m curious what you make of it.

    http://www.vox.com/2015/8/27/9214015/tech-nerds-politics

    Thanks!

    • Scott Alexander says:

      I didn’t like it and was wishing very hard that if I ignored it then that entire class of article and possibly Vox as a whole would go away.

      Since I guess that’s out of the question, I liked nostalgebraist’s response.

      • Stephen Frug says:

        Ah well, apologies if I pushed something you were ignoring in your face.

      • Dude Man says:

        While I agree with nostalgebraist’s view on the article, I do think that the original Vox article describes a viewpoint that is somewhat common among people who don’t follow politics. There seems to be this perception that, since both sides lie and cheat, they both must be terrible and wrong about everything. This attitude persists because the type of person who has it is generally not interested in correcting it. I don’t think this view is more common among tech nerds than the general public, and nostalgebraist is right to attack the author for conflating this view with genuine libertarianism.

        The second-half of the article is just the author attacking Republicans, but what do you expect from Vox?

        (I apologize for continuing this discussion about an article you wished to avoid)

        • Dan says:

          Does anyone know of a good response to claims like “the GOP has moved much further right than the Democratic Party has left”, from that Vox article? Without having thought about it too much it strikes me that it’s not even clear what this means.

          • Jaskologist says:

            It means, “My views are by definition moderate and Good. The Democrats are pretty close to my views, and the Republicans are to the right of them.”

          • Telnar says:

            I can think of two different coherent things it could mean (if it turns out to be more than just an expression of congruence with the writer’s preferences), and have no idea which is common usage:

            1) We have a measure of where particular ideas fall on the left-right spectrum. If you normalize that measure in year X, we find that in year Y the Republicans have moved this distance.

            2) We have a measure of where particular ideas fall on the left-right spectrum relative to views of the general public in the same year. The Republicans had offset a from the general public in year X and offset b from the general public in year Y.

            If the author means (2), then the author’s real comment would in effect be that Republicans have not abandoned positions which became less popular over time.

          • houseboatonstyx says:

            In the US, it seems that both parties have moved to the Right since the 1950s, if you compare their national platforms then and now. Eisenhower’s is more Left than any Democratic platform recently (since ~1980 at least).

          • FJ says:

            The most generous interpretation of the claim (which is predicated on statistical measures of how often party members in Congress vote together) is, “While both parties have become more ideologically coherent in the past 100 years or so, Republican congresspeople have traveled further along the spectrum from incoherence to coherence than their Democratic counterparts.”

            The phenomenon is definitely real, and it’s part of the “Great Sort” of the two parties away from regional blocs into something more akin to actual ideological movements. But it’s hard to translate that into ideological “extremeness,” in part because the range of live political issues hasn’t been consistent over time and the relative importance of those issues is hard to account for statistically.

          • walpolo says:

            But to the extent that the issues *have* been consistent, Democrats have often adopted policies that used to be Republican policies (affirmative action and the EPA were both instituted by Nixon, Obamacare proposed by Republicans in the 90s), whereas Republicans have adopted policies that are more radical versions of their earlier views, like the Grover Norquist approach to taxes.

            This really only applies to economic issues, though. Indeed, one could potentially argue that Dems have moved far to the left on social issues.

          • Chalid says:

            The argument is that both parties used to (on average) be closer to the median voter than they are now, but currently, the Republicans are further right from the median voter than the Democrats are left.

          • PSJ says:

            He is referring to DW-Nominate studies. This sort of thing. So if you are set on saying it’s untrue, show why DW-Nominate is a poor way of measuring ideology.

        • HeelBearCub says:

          nostalgebraist lost me when he called this cartoon incomprehensible.

          Maybe you don’t agree with the point, but it should be very clear what the author is saying.

          And then this phrase, while perhaps technically true, is essentially false: “as if libertarians must be apolitical because they are anti-government”.

          I honestly wonder, if the original article did not use the phrase “tech nerd” and simply said “there is a cohort of intellectually curious people of which Elon Musk is a member” whether Scott would have such strong negative reaction to it.

          • 27chaos says:

            I mean, the proper solution to shallow understandings of politics is more nerdiness, not less. The author is attacking the very people who are best positioned to execute his plans and hopes, because he’s conflated them with the actual problem, because he likes to scoff more than he likes to understand.

          • HeelBearCub says:

            @27 Chaos:
            I assume when you say the author, you mean the author of the original Vox article. If not, what follows probably makes no sense, so read it as if the assumption is true. 😀

            People here are in defense mode about the use of the word nerd. If anyone uses it and is the least bit critical of anyone who is given that label, they immediately get ready to “counter-attack”.

            The Vox author specifically points out that there are people who go already deep, deep, deep into understanding politics, he specifically refers to them as politics nerds and he encourages “Wait But Why” to do an explainer on politics specifically because he wants more “nerdity” in politics.

            Basically he did exactly what you say you wanted him to, and you ignored it.

          • Brad says:

            This is one reason I don’t like this whole reclamation thing. It’s a hateful little word, and I don’t see any sufficient reason to try to overload it.

          • Cauê says:

            And then this phrase, while perhaps technically true, is essentially false: “as if libertarians must be apolitical because they are anti-government”.

            What’s false about it?

          • Nornagest says:

            What’s false about it?

            You can want to reduce the influence of government in most areas without wanting to completely abolish conventional forms of government, which opinion could coherently be called libertarian.

            And on the other hand you can want to abolish conventional forms of government while still holding opinions on how the polis should relate to itself after you do that, which opinions could coherently be called political.

          • Cauê says:

            Nornagest, I agree.

            Maybe I’m misreading it, but I thought nostalgebraist was disagreeing with “libertarians must be apolitical because they are anti-government”, and then HeelBearCub was disagreeing with the disagreement.

        • roystgnr says:

          “since both sides lie and cheat, they both must be terrible and wrong about everything” is incorrect, but it would have been correct if it just stopped after “terrible”. (Okay, not correct, but as close as any statement which refers to groups anthropomorphically can be)

          There’s also a very slight oversimplification in the nostalgebraist response: “as if libertarians must be apolitical because they are anti-government!” Although it’s easy to think of hilariously-extreme counterexamples here, there actually is a subset of libertarian thought which connects the two. First you look at the expected utility of voting, and observe that in most practical cases voting is either grossly irrational or extraordinarily altruistic. A well-informed vote is exactly the sort of non-excludable non-rivalrous good that was supposed to most clearly require government intervention in the first place. Next you count the number of extraordinarily altruistic voters (but it turns out that “extraordinary” and “common” are antonyms) and you compare to the number of grossly irrational voters (but it turns out to still be gross, not just a gross), and you realize the former is just rounding error. Finally you wonder, “if the best-functioning system of government encourages decisions to be made by the most irrational fraction of the populace, how much power should that government hold?”, and the obvious answer has you halfway to libertarianism.
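
          Here is a minimal numerical sketch of that first step, with every input made up purely for illustration (the pivotality probability, the benefits, and the cost of voting are assumptions, not estimates from anywhere):

              # Toy expected-utility-of-voting numbers. Every input here is an
              # illustrative assumption, not an estimate taken from the comment.
              p_pivotal = 1e-7            # assumed chance one vote decides the election
              cost_of_voting = 20         # assumed cost (time, hassle) of an informed vote, $

              selfish_benefit = 5_000     # assumed personal value of your side winning, $
              per_person_benefit = 100    # assumed average benefit per resident, $
              population = 300_000_000    # rough US population

              selfish_ev = p_pivotal * selfish_benefit - cost_of_voting
              altruistic_ev = p_pivotal * per_person_benefit * population - cost_of_voting

              print(f"Selfish EV of voting:    ${selfish_ev:,.2f}")     # about -$20
              print(f"Altruistic EV of voting: ${altruistic_ev:,.2f}")  # about +$2,980

          Under these made-up numbers the selfish case is a clear loss while the altruistic case is hugely positive, which is exactly the “grossly irrational or extraordinarily altruistic” fork described above.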

          • Urstoff says:

            That line of thought seems more anti-political than apolitical. A lot of public-choice-inclined libertarians don’t see much of a point in voting in large elections, but that doesn’t mean that those types of libertarians never participate in political activities. Think of them as the Effective Altruists of politics: where can my minuscule individual political power do the most to move things in a libertarian direction?

    • Pku says:

      Randall Munroe once said ” I had a hard time with Ayn Rand because I found myself enthusiastically agreeing with the first 90% of every sentence, but getting lost at ‘therefore, be a huge asshole to everyone.'” I have a similar problem with Vox – I find myself agreeing with (or at least, finding several interesting points in) the first 90%, then getting lost at “therefore, the world’s problems are all because of racist old white dudes”.

      Also, as an aside comment, the original article they were criticizing ( http://waitbutwhy.com/2015/06/how-tesla-will-change-your-life.html ) is fantastic (though probably a lot of people here have seen most of the stuff in it before).

      • Can I be the first person to say that I find it really confusing to be reading a conversation about Vox magazine and Vox Day at the same time? I read the first 90% of this comment as a reference to Vox Day, but got lost at “racist old white dudes”. (Then I realized that it was the Other Vox.)

        • ADifferentAnonymous says:

          You are not alone. It’s even worse than Stigler and Stiglitz.

        • Daniel Speyer says:

          I suspect it’s too late to be the first person to say that, but you’re the first one to say it in this thread (I’ll be the third)

        • Pku says:

          The first time I heard about Vox Day (last week) my reaction was “what, they have a Vox.com holiday now?”

      • Nathan says:

        My favourite one was where they explicitly rejected a philosophy article because it could possibly be construed to imply opposition to abortion, and then claimed that no, we totally didn’t do it because of that, actually we’re very impartial on the whole abortion issue. I mean, sure, we wrote a huge essay on how Republicans are vile and evil for forcing women to endure pregnancy EVEN IF they actually believe that doing otherwise literally means murdering a human child but on the other hand, look, we once ran some short quotes from pro-life protesters!

        But at least they don’t run serious pieces on the crucial importance of the latest hashtag or meme going around social justice circles anymore.

      • Edward Scizorhands says:

        I’ve been pointed to that WBW about Tesla and so far I’m underwhelmed. There is a bunch of stuff left unsaid which needs explaining. Like “clean == no CO2,” when wood-burning is very unclean if you have to live next to it. Or “there is no more oil being made,” which I say is likely to be true, but there are some serious people who say there is more oil being made. Or “having to rely on oil imports is risky,” except that because oil is so fungible oil-producing countries cannot reliably cut you off from the world market. (Natural gas has different infrastructure requirements.) And I’m still only halfway through.

        I normally wouldn’t criticize except this article has been made to seem like the be-all end-all of energy analyses, both by people linking me to it as well as by the article itself.

        • bluto says:

          His worst sin was just ignoring why gas won eons ago. Energy density, whether by weight or volume, isn’t even on the same scale. After that the whole article became something of a bore.

        • Pku says:

          Specific issues addressed below. I agree that this isn’t an end-all, but it’s definitely a good introduction to some of the major issues. More to the point, it actually got me excited about the future again, which is an awesome feeling I haven’t had in a long time.
          There are some nitpicks of various degrees of importance he didn’t include (to be fair, it’s already pretty long and it does seem like he made a genuine attempt to be fair), but they don’t seem enough to change the conclusions much.

          I don’t think the wood-burning is a serious flaw (it’s not like he’s suggesting we switch to using it as a fuel source), and it is a good way to give some intuition on the carbon cycle (I thought wood burning produced CO2 until a couple of months ago when I learned this on QC).
          About “there is no oil being made”, I didn’t know there were people who say otherwise. This seems unlikely both because it contradicts what I know on oil production (which to be fair isn’t that much) and because it seems like oil production is something that should happen on geological timescales (what kind of evidence do these people have?) I agree with you that having to rely on oil imports isn’t that bad. I also felt like he didn’t take the “long tailpipe” theory seriously enough – it seems to be broadly wrong, but even his calculations show that there are a lot of places where you’d be better off buying a prius. (also, @bluto: that does seem like an oversight, but not a fundamental one considering he’s mainly talking about Tesla, which seem to have mostly overcome that).

        • Douglas Knight says:

          Who says that there is more oil being made? I only know of radicals like Thomas Gold, who say that there is an awful lot of oil out there, not that there is more being made.

        • Brad says:

          It’s been a while, but ISTR there are two “unlimited” oil camps: abiotic and deep hot biosphere.

          • Douglas Knight says:

            I would call the second a sub-camp of the first. I think Gold would, too. “Abiotic” just means original source of the energy is not photosynthesis. If there were multiple alternate camps divided on what that source was, that seems like a big division, but they all say that it is primordial methane. The difference is in the subsidiary question of how it was transformed into petroleum, whether by processes geologic or biologic. (But maybe you’re talking about people I don’t know.)

          • Pku says:

            Why would this imply unlimited oil though?

        • Edward Scizorhands says:

          More complaints.

          1. Letting “the ICE is old-fashioned, not new-fashioned” do far too much work for him. I feel like I’m reading a Thomas Friedman article.

          2. All the politics in an article he said wouldn’t be about politics. I nearly gave up at this point. I’m glad I didn’t because he stopped talking about it (until later when he gets right back into it).

          3. He still thinks battery swap is a thing. WBW published this article 1 week before the tech press revealed that Tesla’s battery swap was dead. Okay, he can’t predict the future, but his article is about predicting the future. If he can’t predict something 5 days out, why am I trusting him on 15 years out?

          4. This paragraph.

          “It’s a rule of thumb in the car world that every $5,000 decrease in car price approximately doubles the number of buyers who can afford the car. So if Tesla can somehow come out with a stellar EV for about $35,000 less than the Model S, it would double the buyer pool 7 times, or multiply it by 125-fold.”

          5. The insistence that EV cars will only improve while never wondering if ICEs will similarly improve. Especially since he earlier talked about how “technology doesn’t improve unless pushed” and “gas car companies are fat and lazy.” Using his own logic, they will absolutely respond to competition by improving their own products.

          6. Talking about how “this is life or death” for the oil companies isn’t really supported. He says “in the US, 71% of extracted oil is used for transportation, and most of that is for cars,” but I did a bunch of digging on the web and “most” is 60% (http://needtoknow.nas.edu/energy/energy-use/transportation/), so we have maybe 43% of the US petroleum usage going to cars. And obviously there is going to be some CO2 cost for the electricity for EVs, and obviously there would be a substitution effect where some other use would be found for a good portion of that oil if it really became available on the market.

          7. “[Batteries] are almost all recycled anyway.” No, getting lithium from the ground is still cheaper than getting it back out of a battery. WBW seems to have not even questioned his own supposition here. The lithium in a battery can have negative value, since it requires more energy to get the lithium out than the lithium is worth.

          8. “You could replace 70% of US gas car miles with EV miles with no changes to the grid.” What about local neighborhood transformers?

          9. He strongly implies that the new dominance of natural gas over coal is permanent.

          10. “I kind of think the only way someone could feel positive about a gas car future is if they’re misinformed, personally financially interested in gas cars, hopelessly old-fashioned, drunk with politics, or kind of just being a dick? Right?” He’s not being sarcastic here. I went into the article hoping for EVs to win. Now I’m just pissed.

          11. Nuclear is barely mentioned. It has no CO2. But he says what we want is “sustainable” power. Why? Is nuclear part of this?

          12. He never says how much changing all American cars to EVs would improve temperature. Last I heard was that American cars were responsible for 0.02 degrees of warming. There is no sense of the trade-offs at all.

          The article presents itself as some dry neutral logical piece. It mostly comes down to failed expectations. If it was “an argument for EVs” I would probably just file this away under “decent argument for EVs.”

          • Anthony says:

            Without commenting directly on the article, or your other comments:

            8. “You could replace 70% of US gas car miles with EV miles with no changes to the grid.” What about local neighborhood transformers?

            We have a FIAT 500e. It came with a 120 VAC charger, and there are options for a 240 VAC charger. I assume Teslas would be similar – at the very least, have a 240 VAC charger available. 240 would not require changing a transformer, merely some rewiring at our electrical panel. Most houses in the U.S. have 240 VAC available, with wiring to deliver 120 to most outlets.

          • Pku says:

            I more or less agree with 1-11, but I was reading it more as “guy who’s excited about EVs talking about how cool they could potentially be” (and he did mention going in that he was pretty pro-Tesla). About 12, the point I got was that having mostly EVs would drastically reduce city pollution and noise. (I’m not sure if that’s actually the point he was trying to say, but it was pretty exciting).

          • Douglas Knight says:

            Anthony, if the answer were so simple as that, where does the 70% come from? WBW agrees that eventually changes to the grid are needed.

          • HeelBearCub says:

            @Douglas Knight:
            I think the question is “how big is the marginal effect”?

            I mean, there absolutely are marginal effects. Some neighborhoods (i.e. distribution areas) will be affected by adding the extra load.

            But consider that charging time is mostly off peak. So the question is a) Will off-peak added load over-top peak load in any distribution networks? and, b) How much peak load will be added such that neighborhood transformers will be pushed over the edge?

            I’m guessing that the marginal effect on transmission and distribution is small, and that added off-peak load won’t require too much in the way of extra generation capacity. Added aggregate fuel demand and extra-maintenance are probably the biggest factors.

          • Anthony says:

            Douglas, I’m replying to Edward Scizorhands’ question, not the original claim. ES wonders if (changes to) neighborhood transformers would be necessary before that 70% threshold is reached. In general, no, because (almost) everyone gets 240 and you can charge a Tesla on 240.

            I assume that the original claim is based on the idea that once 70% is reached, there will be significant upgrades needed to get that much power out to all those charging cars. I suspect that’s a little simplistic, since in some areas, upgrades will need to happen well before 70% adoption of EVs, while in others, there’s already enough capacity to support 100% adoption.

          • Douglas Knight says:

            Anthony, the number 70% has nothing to do with the number of houses that get 240. Rewiring to get 240 is a trivial change, unrelated to what we mean by “change to the grid.”

            HBC, arranging off-peak charging might, itself, require change to the grid. California peak is in the evening, just when people might want to recharge their cars. Currently the cheapest prices are during the night because of base-load plants like nuclear and coal. Cars could cooperate in the prisoner’s dilemma and spread out their charging through the night. Large adoption of solar will change peak availability. Already, Germany has negative electricity prices at noon on many days. If everyone charges at noon, that requires change to the grid. If everyone charges at noon from their employer’s solar panels, that bypasses the grid. Ultimately, that’s a good solution, but it requires a lot of coordination and the path to get there might not be pretty. This is further exacerbated by the Green desire to completely eliminate base-load plants. This would make the price of electricity swing more wildly and concentrate the use of the grid. Although the revealed preference of the German Green party is to replace nuclear with coal.

            [Actually, I guess I’m saying that electric cars are a solution to the grid stress caused by solar power. Electric cars that only charge at night don’t cause grid stress, if they can coordinate their charging. But is it realistic to ask people not to charge during the day, or even the evening? Well, it’s moot, because solar is coming.]

          • Edward Scizorhands says:

            Anthony, the number 70% has nothing to do with the number of houses that get 240. Rewiring to get 240 is a trivial change, unrelated to what we mean by “change to the grid.”

            Thank you. I’m not talking about in-house wiring. I’m talking about neighborhood transformers. Google “electric car neighborhood transformers” and see the variety of articles talking about how neighborhood transformers (things that serve maybe 3-7 houses) would suddenly have their load significantly increased. Each car is like another 2-3 houses. (240 volts * 10 amps = 2.4 kilowatts, which is roughly 2.4 houses. And 10 amps is weak. It would take 25 hours to charge a 60 kWh Tesla Model S at that rate.)
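
            To put rough numbers on that (a minimal sketch; the ~1 kW average household draw is an assumption read off the “2.4 houses” figure above, and the charger currents are illustrative rather than actual Tesla charger specs):

                # Rough charging arithmetic. The ~1 kW average household draw is an
                # assumption implied by "2.4 kW ~ 2.4 houses"; charger currents are
                # illustrative, not Tesla specifications.
                BATTERY_KWH = 60            # 60 kWh pack, as in the comment
                AVG_HOUSE_LOAD_KW = 1.0     # assumed average household draw

                for volts, amps in [(240, 10), (240, 40)]:   # weak vs. beefier home charging
                    kw = volts * amps / 1000
                    hours = BATTERY_KWH / kw                 # ignores charging losses
                    houses = kw / AVG_HOUSE_LOAD_KW
                    print(f"{volts} V x {amps} A = {kw:.1f} kW "
                          f"(~{houses:.1f} houses), full charge in ~{hours:.0f} h")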

            Plus those neighborhood transformers were built with the expectation that the load would be less overnight and that they would get to cool down overnight.

            California peak is in the evening, just when people might want to recharge their cars

            As you say, pretty clearly cars would wait to do serious charging. Most users don’t care about needing their cars until morning. And the slower you charge the car, generally the less waste to heat and the less depreciation on the battery. So people and software will naturally trend to ‘charge the car at the rate that leaves it fully charged 30 minutes before I expect to leave in the morning.’

          • Held in Escrow says:

            Frankly, the rise of the electric car is a boon for utilities, mainly because you can use them to balance demand on the grid. Much of the worry isn’t about there being too much demand on the grid but rather not enough! See CAISO’s duck curve; if you can set it up so cars charge during the day in their parking lots you can deal with the additional solar on the grid much easier. You can also use the cars as demand response during the night, pushing a bit of power from each car back onto the grid, I believe, in order to deal with spikes in demand.

      • Eli says:

        I have a similar problem with Vox – I find myself agreeing with (or at least, finding several interesting points in) the first 90%, then getting lost at “therefore, the world’s problems are all because of racist old white dudes”.

        Could you perhaps label the part where they actually say, “therefore, the world’s problems are all because of racist old white dudes”? Because I’ve never seen that part.

    • Who wouldn't want to be Anonymous says:

      The most obvious problem with the article–setting aside the absurd nerd stereotypes–is that it perpetuates the second biggest American politics sin and conflates rhetoric and substance. The parties both spew an awful lot of rhetoric about what they stand for, but in practice they are both incoherent amalgams of competing interests.

      No politician from Nevada will ever green light Yucca Mt and no politician from Penn will ever say a bad word about coal power plants, no matter how much they may otherwise talk about sustainable energy or whatever. No politician, ever, would dream of pooh poohing farm subsidies, no matter how much they talk about big government or subsidies distorting the marketplace.

      Rhetoric and substance are completely divorced.

      • “No politician, ever, would dream of pooh poohing farm subsidies, no matter how much they talk about big government or subsidies distorting the marketplace.”

        Let me offer a little data contrary to that claim.

        A very long time ago I spent a summer as a congressional intern for Tom Curtis. He had introduced a bill to abolish about half of the farm program, the Agriculture department had produced a strikingly dishonest report on the consequences of abolishing it, and one of the things I was doing was critiquing that report. I don’t think he expected his bill to pass and of course it didn’t, but I think it counts as a politician pooh poohing farm subsidies—and a politician from a farm state at that.

    • I came expecting a useful critique of nerd political reasoning and left wondering where the last 10-15 minutes went. 🙁 The author should consider some of the things nerd culture dislikes about the left, like for example post-modernist emotional greenhousing.

      • alexp says:

        post modernist emotional greenhousing?

        • Anonymous says:

          I second alexp’s confusion, what does that phrase mean?

        • Zorgon says:

          I’m gonna hazard a guess – using post-modernist linguistic concepts to cause emotive cues to effectively reflect in on themselves rather than exhausting themselves in the usual fashion, causing endless vast eruptions about relatively minor things.

          Am I close?

        • 27chaos says:

          I like that phrase! I think Zorgon made a good guess at its meaning, I was thinking something similar. It’s about getting people’s sense of outrage to grow, like a giant evil plant.

          • This phrase generated more interest than I intended, but you’ve basically got the greenhouse part – I got it from some psychology book (can’t remember which). Rather than just “having” your feelings normally, you sit around celebrating your and your ingroup’s feelings and how good and legitimate they always are, so they artificially grow like a plant in a greenhouse. With SJWs I think you kind of get an outrage plant, whereas at least with old fashioned hippies you get a mostly nice plant.

            The post-modernist bit means that when you tell them it’s a demonstrably bad idea to grow outrage plants, and that outrage plants have been proven to kill off logic plants, they tell you that you shouldn’t push your anti-feelings modernist “truths” on them. That’s just your “oppressive metanarrative”.

            I lean towards egalitarianism and do support the left on several issues (e.g. environment), but the left’s postmodernist turn is pretty tragic imho.

      • Eli says:

        Postmodernism hasn’t been the defining trend on the Left since, like, 2008. Go find a Jacobin reading group or something.

        (Seriously, Jacobin reading groups are a lot of fun!)

  3. Janne says:

    If r/china is anything like r/japan, probably 95% of commenters are really Americans. And the natives you do find will be a highly anomalous subset by virtue of hanging out on a foreign English-language forum. r/china may be many things, but I doubt it will teach anything at all about China or Chinese attitudes towards anything.

    • DanielLC says:

      The first comment translates the article and some of the comments. I assume he was referring to those comments.

  4. Vox Day published my one and only work of SF, after being more helpful at stimulating me to write at publishable quality than any other editor in 40 years.

    I judge that Vox Day is insane, in the strong sense that his machinery of belief formation is seriously deranged. Vox knows that I believe this, and he even understands why I believe this. And he treats me well anyway.

    My first published story came damn close to winning a Campbell (third after No Award and one human; the No Award votes were political). It probably wouldn’t have been on the ballot but for Vox Day. If it had been on the ballot without being marked “Puppy”, it would almost certainly have come in second, and I think it’s fairly likely it would have won.

    But, I am rather doubtful that I would have actually deserved the award had I won it. Yes, my story was good solid work, but award quality? Wearing my SF-critic hat I have to say “no”. Thing is, I have my own voting bloc that’s not the Puppies … by reliable report, a lot of geeks were jumping at the chance to vote up “ESR” for tribal but entirely non-political reasons.

    The world is a complicated place. Beware of easy answers and simple explanations.

    • lmm says:

      Ok, I have to ask: what does “puppy” mean in this context?

      • jaimeastorga2000 says:

        The Sad Puppies and the Rabid Puppies are related movements of science fiction writers and fans dedicated to taking back the Hugo Awards from the SJWs by encouraging non-SJWs to sign up and vote.

        • pneumatik says:

          Of interest is that there were just under six thousand ballots cast for the Hugos this year, even with all the public controversy. I found that number surprising, given the amount of effort that went into influencing those votes.

          • asdfghjklsdfghjkl says:

            Becoming a member with voting privileges costs money and involves paperwork; online commenting requires neither, so it’s not so surprising.

          • Anthony says:

            What’s more interesting is that there were over ten thousand paid members, while there were only six thousand ballots.

            On the other hand, the $40 for a supporting membership got you more than $40 worth of ebooks. So maybe some people just joined to get the Hugo Packet.

      • Nita says:

        There were two groups dissatisfied with the Hugo Awards: “The Sad Puppies” and “The Rabid Puppies” (the latter led by Vox Day, the author of “SJWs Always Lie” mentioned in the OP).

        They tried to stage a coup of sorts by coordinating their votes at the nomination stage. That worked really well, and then others retaliated by voting against everyone and everything nominated by the Puppies. Arguably, the outcome was negative for all involved, with the possible exception of Vox Day.

        • Whatever Happened to Anonymous says:

          It was all really funny, though.

        • kerani says:

          They spent the last three years trying – with increasing success – to stage a coup of sorts by coordinating their votes at the nomination stage. That worked really REALLY well this year, and then others who had been okay with, or ignorant of, previous coordination by either Sad Puppies or other entrenched interests retaliated by coordinating voting against everyone and everything nominated by the Puppies. Including authors and works which were on the side of the entrenched interests in other years. Because coordinating voting is bad, and the people who do it must be punished. Arguably, the outcome was negative for all involved, with the possible exception of Vox Day.

          Given what he’s said publicly, I think VD anticipates enjoying next year a lot more than this year. The Sad Puppy leadership spent a lot of this year talking VD out of the least charitable of his impulses. I get the impression they’re not going to try so hard in the future.

          My sympathies are almost entirely with Sad Puppies, but there are people on all sides who need refresher courses on the differences between “my side is awesome!” and “your side stinks!”

          • Peter says:

            It is interesting that you don’t mention the distinction between the nomination system and the final vote, and how the two have very different properties with regards to slate voting.

          • Anthony says:

            Peter – both the nomination system and the final voting system are susceptible to slate voting, but in slightly different ways. The main difference between the nominations and the final votes this year was that the “left” side didn’t mobilize until the final vote, when their slate swept most of the more prestigious categories.

          • Peter says:

            Anthony: Explain. In particular, justify the “slightly”. In your answer, bear in mind that the final voting system isn’t straight AV, but slightly modified so that there’s a run-off – the eventual winner has to beat No Award in a two-horse race. So how does co-ordination get an unfair advantage there? Even in things like the Best Novel category – if you look at the first preferences, things were pretty evenly divided between the three non-Puppy novels, all three got handily more first preferences than the Puppy ones. So I see no need for co-ordination there either.

            “Anything but the Puppies” isn’t a slate; it’s the exact opposite of a slate.

            The second part – I’ve seen the nomination results, I’ve spent quite a while looking through them. You have your idea for a “main difference” – that’s not the picture I get. Apart from the voting system, the other big difference I see is that the turnout for nominations is a lot lower than the turnout for the main vote, year on year.

          • Anthony says:

            Peter –

            The effect of slates on Single Transferrable Vote should be obvious; what happened in this year’s Hugo Awards was that the Puppies slate wasn’t nearly as large numerically as the No Award slate.

            Look at the race for Position 2 in Novella – there’s effectively a John C. Wright slate. It’s not quite enough to bump “Flow” out of second place, but it’s close.

          • Peter says:

            Do you even understand the complaints about slate voting in the nominations?

            I’m looking at the Novella stuff right now. There do indeed seem to be a bunch of John C. Wright fans… less so than at first glance, when I look at the numbers, but whatever. This is AV working more or less as intended. If a majority (of those still expressing a preference in that round) wanted a John C Wright, a majority would have got a John C Wright. No problem – this is democracy in action. The fact that the John C Wright vote was initially split over three candidates wasn’t a problem – AV meant that the vote found its way together again.

            Contrast with the nominations – let’s use the novella noms now we’re talking about them. The two Rabid-only picks were on about 15% of the ballots, the three Sad+Rabid picks were on about 30% or so. So we’re not seeing majority support for any of them. The remaining 70% or so of the electorate had their choices spread out over a wider range of candidates. That 70% was split over a range of different candidates, and unlike in AV where vote-splitting is much less of an issue, here that vote-splitting meant that the majority got zero out of five slots. This is IMO a big difference and why “slightly” is not appropriate here.
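
            If it helps to see the asymmetry concretely, here is a toy simulation (this is not the actual Hugo counting procedure, and the voter and work counts are invented): a ~30% coordinated bloc sweeps an approval-style nomination stage because the majority splits its picks across a wide field, but loses an instant-runoff final where the dispersed majority’s preferences still aggregate:

                import random
                from collections import Counter

                random.seed(0)
                N_VOTERS = 1000
                SLATE = [f"slate_{i}" for i in range(5)]       # coordinated slate works
                FIELD = [f"other_{i}" for i in range(40)]      # everything else

                # Nomination stage: each voter lists up to 5 works, top 5 counts advance.
                nominations = Counter()
                for v in range(N_VOTERS):
                    if v < 300:                                # 30% bloc nominates the slate
                        nominations.update(SLATE)
                    else:                                      # 70% majority splits over the field
                        nominations.update(random.sample(FIELD, 5))
                finalists = [work for work, _ in nominations.most_common(5)]
                print("Finalists:", finalists)                 # all five are slate works here

                # Final stage: instant runoff over the finalists plus "No Award".
                def irv(ballots):
                    alive = {c for b in ballots for c in b}
                    while True:
                        tally = Counter(next(c for c in b if c in alive) for b in ballots)
                        leader, votes = tally.most_common(1)[0]
                        if votes * 2 > len(ballots):
                            return leader
                        alive.remove(min(tally, key=tally.get))   # eliminate weakest, redistribute

                ballots = []
                for v in range(N_VOTERS):
                    if v < 300:
                        ballots.append(finalists + ["No Award"])                   # bloc ballot
                    else:
                        ballots.append(["No Award"] + random.sample(finalists, 5)) # majority ballot
                print("Winner:", irv(ballots))                 # the dispersed majority prevails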

      • Let me explain—no, there is too much. Let me sum up:
          Literary status envy makes puppies sad.
          ‹insert picture of sad puppy›
          Do your part to prevent puppy-related sadness by nominating and voting for good works of SF/F in this year’s Hugo Awards.

        • 27chaos says:

          Also, attempts to depoliticize something will always be vulnerable to accusations of attempting to make something more politicized than it ever was. Sometimes these accusations will have merit, and sometimes they will not, but it is hard to tell and everyone will have a strong opinion anyways.

      • Peter says:

        A controversy which has been discussed ad nauseam in a variety of places, including a few threads here. Don’t trust what anyone here (or anywhere else) has to say about it – to echo ESR, beware of easy answers and simple explanations. There are all sorts of odd complications; the first being the Sad/Rabid distinction – the Sad movement is older, the Rabids seem to have commanded larger numbers and generally been more disruptive (or effective, if you want their spin rather than mine).

        My personal favourite on the issue has been George RR Martin’s series of posts on his Livejournal, but a) I would say that, I’m a moderate anti-Puppy, and b) there are some things I have to disagree with.

        • birdboy2000 says:

            Martin’s been attempting to be fair – haven’t seen intentional strawmanning or anything from him – his views are close to my own, and his posts are an excellent explanation of why many in SF+F fandom, especially those who reject the nasty, hateful brand of identity politics the Sads and Rabids accuse all their critics of, nonetheless came down anti-puppy. (Eric Flint’s are too, and his views are basically my own, for that matter.)

          But good as he is, if you want to understand where the sad puppies are coming from there’s no substitute for Larry Correia’s own writings on the matter. I can not say “read both sides on anything controversial” enough.

          • Peter says:

            I’ll give Correia this – while I don’t agree with the campaign he started, he does have quite a lot of good points, and there have been times I’ve been happily agreeing with him (here on SSC and everything); e.g. on his takedown of K. Tempest Bradford’s campaign. So, seconded, he’s well worth reading.

            Yes – there is a nasty, hateful brand of identity politics, it can be found in SF circles, but despite all, it is indeed possible to recognise all that and still come down anti-Puppy.

            Oh, and Flint; I’ll recommend his “In Defense Of The Sad Puppies” post.

    • Whether you affiliate with SP4 or (more likely) not, I hope you’ll open a conversation on A&D for fellow geeks to suggest recent SF/F works not afflicted with literary status envy—and remind them that everyone who registered to vote in 2015 will be eligible to nominate in 2016.

      • Anthony says:

        Seconded. Especially since far too much of the Puppy argument was pretty much tribal political posturing. While I have no illusion that the left would actually take the Puppy argument any more seriously or fairly if the discussion was about literary status envy, it would be more interesting for me, and I suspect some actual neutrals might be persuaded.

      • brad says:

        I think there’s a false dichotomy in there.

        The likes of Correia, on the other hand, churn out primitive prose, simplistic plotting, at best serviceable characterization – and vastly more ability to engage the average reader.

        Such people prize the “novel of character” and stylistic sophistication above all else. They have almost no interest in ideas outside of esthetic theory and a very narrow range of socio-political criticism. They think competent characters and happy endings are jejune, unsophisticated, artistically uninteresting. They love them some angst.

        The first paragraph describes low brow art, the second high brow art, but missing is middlebrow. There’s no reason you can’t have excellent literary execution (prose, characterization, plotting) *and* the SFF virtues of world-building and discovery. That’s the kind of work that would be most worthy of a Hugo, neither McDonalds nor elBulli but Peter Lugers.

        • TheNybbler says:

          I suspect there’s a reason the specific term “novel of character” was used. The SF genre was in some sense born with the dispute over the primacy of the “novel of character” vs the “novel of ideas”. H. G. Wells championed the latter; Henry James the former. James won, the “novel of character” became the standard of mainstream literature, and the “novel of ideas” was sent to the genre ghetto. So to call something a “novel of character” is to say it’s the opposite of SF.

          But you don’t have to look so hard to see that something is rotten in Hugoville. You need only look at two series, the Honor Harrington series and the Miles Vorkosigan series. They’re both military SF. They’re both the same “brow” level. They’re both extremely popular. The novels from both are even published by the same publisher. Yet the Vorkosigan series has been showered with awards and nominations and Harrington’s got nothing.

          • Hmmm. I’ve read neither, but I’ve at least heard of the Vorkosigan series, so I suspect it may be the more popular — yes, that might be because of the awards, but I think it’s because the author has other published work which I have read.

            It wouldn’t seem surprising if there were an “author effect” in the Hugos, much like a film with an already-famous actor or director is likely to be better known than a similar film without either. I’m not sure whether or how that fits into your argument. Just a thought.

          • Rauwyn says:

            (Disclaimer: I read the Honor Harrington books a few years back, and didn’t read all of them. I’ve reread the Vorkosigan books fairly recently.) I’ve read both series you mention, and I’d disagree with you on the “brow” level, but I wonder if my disagreement is just another manifestation of the “character vs. ideas” dichotomy you’re talking about. Both series borrow a lot from historical fiction, the Honor Harrington books from something like Patrick O’Brian’s Aubrey-Maturin series, the Vorkosigan books from Georgette-Heyer-type romance novels (Regency romance?). But whether or not it’s the influence of those styles, on the character front Bujold just seemed to write more interesting characters, and the French Revolution parallels in Weber’s books got to be a bit much for me when we were introduced to “Rob Pierre”. And on the ideas front, both have fairly advanced technology, but the Vorkosigan books focus more on the social impact of specific advances, while in the Honor Harrington books the technology seems to be all there to drive an arms race for cooler space battles. (And they are cool space battles! I like space battles! But that does seem lower-brow in a way.) I’m not exactly sure what the ideas of the Honor Harrington series are beyond “What if the French Revolution was in space, with magic space cats?”

          • John Schilling says:

            Rauwyn has it about right. Weber consistently produces good light entertainment, if meticulously described space battles against the Forces of Pure Evil are your sort of entertainment. Bujold has written one meticulously described space battle against the Forces of Pure Evil, more or less. She occasionally dials her ambitions down to the level of “good light entertainment”, and even then puts far more effort into characterization and worldbuilding and exploring clever ideas than Weber ever does. And her villains are almost never the Forces of Pure Evil.

            Bujold wins Hugos, or at least used to, while Weber merely gets lots of sales. This is as it should be.

            They both, interestingly, set their stories in enlightened hereditary monarchies that compare favorably to the various democracies that surround them.

            And I’m pretty sure that Weber does get substantially better sales, though that’s hard to pin down.

          • Andrew G. says:

            Having read (and enjoyed) both the Harrington and Vorkosigan series, I don’t see the disparity in awards to be in any way an indication of a problem with the Hugos. And none of the factors you list would lead us to expect that the two series would automatically earn similar numbers of awards.

          • Held in Escrow says:

            I’m going to have to echo John Schilling here; Weber writes a distinct type of mil-fic about the good guys overcoming evil through the power of biggatons and conference rooms. The overall setting isn’t really well crafted so much as twisted into whatever will fit his current plot, the characters after a while become fairly flat, and old foes become instant friends just because a bigger Evil is there to shoot at. It’s great missile porn and the first few books are very readable, but it’s not exactly award material.

            The Vorkosigan Saga is all about characterization and the effects of technology. There aren’t many battles or action scenes, just characters investigating, dealing with problems, talking to people, and occasionally politicking. There are themes at play, moral conflicts, and lots of subtext. It’s an award winning series because of this, even if it doesn’t sell as much because there’s not as many fun battleship fights to follow.

            In effect, Weber is about getting you to the big fights. Bujold is about trying to prevent the fights from happening. Is it any wonder the latter is going to tend to be deeper and win more awards?

          • Andrew G. says:

            As a slight tangent, am I the only person who gets really irritated by Weber’s ongoing abuse of the word “temporal”?

            (Not talking about the Multiverse books here, but rather its use in the religious context; “temporal” in this sense means the same thing as “secular”, but Weber uses it as though it meant the opposite.)

  5. Squirrel of Doom says:

    The FDA article is confusing – and a positive surprise – after this article claiming the opposite came out this week. It deals with a different time period, so both can of course be correct.

    http://marginalrevolution.com/marginalrevolution/2015/08/is-the-fda-too-conservative-or-too-aggressive.html

  6. Rauwyn says:

    If anyone’s interested in the parody book but not enough to pay money, the audiobook version is available on Scalzi’s web site.

    • Scott Alexander says:

      I hope this isn’t actually by John Scalzi. That would make it a lot less funny and more pathetic.

      • Stephen Frug says:

        Based on what he said on his site (when soliciting for donations re: the audiobook), he didn’t write the original parody. (I have no independent confirmation of this, but it’s what he says.) He did record an audiobook of it.

      • NL says:

        It’s by Alexandra Erin, who also did satirical “Sad Puppy reviews” back in May/June http://www.alexandraerin.com/tag/sprb/

        • walpolo says:

          That James May one is a really good parody of May.

          • Vorkon says:

            That’s what I thought too, and after reading it I thought to myself “wow, that sounds EXACTLY like the guy! I may not be on her side in the puppy debacle, but if she can mimic tone this well, then this Vox Day parody book might actually be worth a read…”

            Then I read through the comments, and found out that apparently it’s an actual comment that May left during an argument with her, which she just copy/pasted and then claimed was a review of The Poky Little Puppy. Still a moderately funny way to dismiss an internet comment that you don’t like, mind you, but hardly the comedy gold I thought it was when I believed she was actively mimicking his tone.

      • Gbdub says:

        Given that probably a sizable majority of fans of Scalzi the SF writer do not share his politics, you’d think he’d be more charitable to his political opponents (most of whom just seem to want the books they like, which are more like Scalzi’s than the SJ promoted stuff, to get some recognition).

        Then again on a military sci-fi panel at Comicon (on which he was the most famous author, but the only one without actual military experience) he said the reason he got into military sci-fi was basically mercenary – he wanted to make enough money to write full time, so he wrote what he thought would sell. (Again, you’d think this would garner at least a modicum of Puppy sympathy…) At the time I took this as self-deprecating humor, but now I’m not so sure. But damn if Old Man’s War isn’t fun…

        • roystgnr says:

          Scalzi’s blog drama for this morning could be summarized as:

          1) His friends vilified a cash-strapped ESL immigrant woman (it’s still “punching up” if your victim is smarter than you, I guess?) for using the word “Chicom”.

          2) She (Sarah Hoyt) defended herself and dissected their historical, grammatical, and lexical illiteracy, mentioning her Mensa card in passing.

          3) Although the original attacker has now tried to backpedal (I never called you a racist! I just said you were using an ethnic slur!), Scalzi saw this as an opportunity to pile on and accuse her of bragging, too.

          At this point, even if he had an epiphany and became a more charitable person, who would believe it? It would just look like another mercenary calculation; except that choosing a popular subgenre to improve sales was a very respectable decision whereas acting like a nice person to improve sales would not be. (although, I admit, it would still be better than the current alternative)

          • Urstoff says:

            The endless culture war is very tiring. I’m sure people that could possibly win Hugo awards care a lot, as well as the twenty or so people that regularly go to Worldcon, but as a reader I haven’t found that the Hugos actually indicate quality since at least the ’90s and were very inconsistent before that (does that mean I’m on a particular side of the culture war? I hope not). Redshirts, which was an absolutely awful book, won it a couple of years ago. Ancillary Justice was interesting, but the hype it received was far out of proportion to its quality. So it seems like everything is about which side gets the prestige (from whom? The occasional NPR story?).

            I can mostly avoid it, but it occasionally infects SF news/reviews sites that I like to read (no bubble is perfect, I guess). It’s just as well; I’ve started reading a lot more literary fiction where, oddly enough, the culture war is much, much less intense (maybe because one side has already won?).

          • alexp says:

            I’ve no idea about the drama that unfolded, but I have never read anybody use the word “Chicom” unironically without being at least a little racist, usually in a “fantasizes about mowing down Chinese hordes” way.

          • Brian says:

            @alexp

            The particular use was Hoyt commenting that SJWs probably voted for Three Body Problem because they thought the author was a Chicom. Definitely not a slur on the Chinese, though certainly nasty to Scalzi’s clique.

          • gbdub says:

            To the extent that “Chicom” is a slur, the emphasis among those that use it is mostly on the COM, not the CHI. So I don’t think “racist” is really fair (if anything assuming everyone Chinese is a Communist is rather more prejudicial).

            Re: the Hugos, it seems true of arts awards in general that they are mostly status games for the sorts of people who vote on and follow awards (an in-group clique and in-group wannabes, respectively). And in that sense the awards are largely about showing how much smarter and more progressive you are than the plebes that consume mass art.

            So, obviously, awards must go to “serious” “important” works even if they aren’t “popular”, “entertaining”, or “particularly good”. Isn’t everyone familiar with “Oscar Bait”? For the Oscars, most movie goers just seem to ignore them and go on watching what they want to, unless they can use them to flog some social issue or another (“Hollywood is too leftist!” “How come there aren’t any black transgender asexual atheist undocumented immigrant Oscar winners?!”).

            For the Hugos, I think you’ve got two issues – first, it’s a more niche community, and it’s a community that developed into a strong one on its own, without a lot of involvement (and if anything outright rejection) by the SJ folks who now suddenly want a seat at the head of the table. That’s going to breed understandable resentment from the people who were into SF before it was cool. Second, the Hugos billed themselves as a “readers’ award” while apparently actually being awarded by a small in-group that bothered to be involved. As long as the in-group more or less tracked the broader fanbase, people were mostly okay, but once the in-group started to go off on a tangent, some of the broader base went, “hey wait a minute…”

            I thought Redshirts was fun, but hardly Scalzi’s best work, and it definitely felt like a premise that could have made a great little novella but got awkwardly stretched to novel length.

            Three Body Problem was interesting, and I enjoyed it as much for the cultural aspect as the SF (and I’m fine with admitting it). But unless a LOT was lost in translation, it’s not very well written. Also, the ending sucked (minor SPOILER ALERT): most of the book revolves around solving a mystery about weird stuff happening in physics. The end of the book solves this in a way that threatens the very existence of humanity! Book immediately ends with no resolution of this stunning revelation.

          • alexp says:

            As far as I know, Three Body Problem doesn’t promote Maoism and in fact depicts the Cultural Revolution harshly. Roughly 95% of all ethnic Chinese are ‘ChiComs’ going by the definition ‘subjects of the Chinese Communist Party’ and a much smaller percentage are if we go by the definition ‘supporter of Maoist Ideology.’ I see no evidence that Cixin Liu is a member of the latter category, nor do I see evidence that the anti-Puppy (I’m anti-Rabid Puppy and especially anti-Vox Day and I take great offense to being called a SJW) voters thought that Liu was a supporter of Maoism. Nor do mainstream progressives in this country support the Chinese Communist Party (they’re more likely to say “Free Tibet” or support the Falun Gong or talk about pollution in China or condemn civil rights offenses in China… basically there’s a lot progressives and SJWs don’t like about China).

            In the end, she’s a racist **** because she’s dismissing the accomplishment of a Chinese man because he’s Chinese and then tries to cover it up by saying, “Oh I didn’t dismiss him because he’s Chinese, I dismissed him because he’s a Chinese Communist” but without any evidence that he’s a communist.

            Now it’s likely that some people supported him because he fit the broad category of “Non-western,” and it’s possible Hoyt was saying so in a slightly ruder manner. It’s equivalent to dismissing a Pakistani author winning an award because said author is an Islamist terrorist supporter.

          • In the end, she’s a racist **** because she’s dismissing the accomplishment of a Chinese man because he’s Chinese and then tries to cover it up by saying, “Oh I didn’t dismiss him because he’s Chinese, I dismissed him because he’s a Chinese Communist” but without any evidence that he’s a communist.

            Anyone who read the first chapters of Three-Body Problem knows the author’s not sympathetic to the Maoists/Chicoms of the Cultural Revolution era. Sarah Hoyt was not discussing the author’s politics at all but the reading abilities of a particular few who have praised the book, suggesting they read so poorly they thought the author was a fellow-traveler of theirs and voted accordingly. I thought this was extreme myself (especially as she acknowledged that others legitimately liked the book), but up sprang a few of that set demonstrating such a complete inability to understand what she’d written that I wonder whether she didn’t have the right of things after all.

          • Randy M says:

            alexp, don’t forget Tom Friedman’s oft-expressed China envy. I think he’s more mainstream than Free-Tibet-ers.

          • Chalid says:

            I just looked at the relevant bit of the Hoyt post (not gonna wade through the whole thing); I’d analogize it to calling someone an affirmative action hire. Insulting to Liu, and exactly the sort of thing many minorities are sensitive to. It’s not a racist statement, but it’s the sort of thing that racists often say.

            FWIW I adored Three Body Problem; definitely my favorite SFF book of the year.

            (Disclaimer: though I have my sympathies, I’m not immersed in this whole puppy thing and am generally unfamiliar with Hoyt. I’m just evaluating that one Chicom paragraph in isolation, which may be unfair.)

          • Sarah Hoyt is an ESL immigrant woman?!?!?!

          • gbdub says:

            “In the end, she’s a racist **** because she’s dismissing the accomplishment of Chinese man because he’s Chinese and then tries to cover it up by saying, “Oh I didn’t dismiss him because he’s Chinese, I dismissed him because he’s a Chinese Communist” but without any evidence that he’s a communist.”

            Okay, here’s what Hoyt actually said that started the kerfuffle:
            “Yes, yes Three Body Problem. Well, I didn’t find it worth it, but I bet you half the people who voted for it voted either under the illusion they were favoring Chicoms OR as a slam against the puppies.”

            It’s pretty clear (see “illusion”) that Hoyt is fully aware that Liu is NOT a Communist, and is in fact making fun of people who may have voted for him on that illusion (she says this explicitly in a follow up post, and anyway nobody can read TBP and find it overly sympathetic toward Maoism). It’s also pretty clear she’s not exhibiting any ill-will toward Liu, except insofar as she didn’t find his book very good. She is NOT dismissing him for being Chinese, or Communist. She’s dismissing him for, in her opinion, not being a particularly good writer. All the ire is directed at the voters (and in the larger context of the post, which was bemoaning the “No Award” anti-puppy block voting, that’s also clear).

            So at worst she’s being insensitive in the “affirmative action hire” vein. But I think, “I read your book, and didn’t find it very good, and I think your award was politically motivated” is a bit different from just seeing a random minority in a job and labelling them an affirmative action hire.

            I just honestly don’t get how you get racism out of that statement – nowhere is she critical of Liu for being Chinese, nor does she imply that Chinese people can’t write or have any other negative qualities. Yes, “Chicom” references Chinese, but that’s mostly because Chinese Communism is a totally different beast from say Soviet Communism or North Korean Communism. “I hate Chinese Communists” =/= “I hate the Chinese race” any more than “I hate American Conservatives” means “I hate the American race”.

            I don’t even like Hoyt. I haven’t read any of her books (not a big fan of her genre), and the times I’ve stumbled across her blog I’ve found it strident and over-the-top in much the same way that her critics are (often rightly) accused of being. But the fact that her statement gets her labeled a “racist *****” not worthy of engagement by civilized people and starts a huge internet pile-on is EXACTLY why the W gets appended to SJ.

          • Chalid says:

            It’s a commentary on how race-obsessed we are that Hoyt’s factually indefensible statement about Hugo voters, in which she literally calls people communist sympathizers with zero evidence, is being debated in terms of whether she used an appropriate word for communist.

          • Alexp says:

            It’s an absurd stretch to think that progressives voted for a book because they thought the book was Maoist. Progressives don’t support Maoism, and even if you think they’re stupid robots who support Marxism unconditionally, they wouldn’t support modern Chinese Communism, because it retained the authoritarian character and lack of human rights that characterized Cold War regimes while adding full-scale corporatism and crony capitalism. It’s as absurd as thinking that progressives like Malala Yousafzai because they think she supports Islamism and Sharia Law.

            Is Sarah Hoyt that dumb? Does she think her readers are that dumb? Or maybe she was associating a person’s work with a term that evokes the Yellow Peril in order to hide lazy stereotypes of liberals.

            And you may or may not think that “ChiCom” is a racist term. Just don’t call me or any member of my family a ChiCom to my face.

          • ThirteenthLetter says:

            Just as a public service announcement, you are allowed to believe her statement was ridiculous and insulting towards Hugo voters without also having to drink the entire jug of Kool-Aid and accuse her of making a racial slur. Seriously, it’s fine.

            The most dispiriting part of this whole mess was that the SJW who called Hoyt a racist was apparently completely ignorant of the term “ChiCom.” These are our self-proclaimed best and brightest writers? Oy.

          • John Schilling says:

            It is quite plausible that progressives voted for a book because of the author’s nationality or ethnicity, and that Sarah Hoyt was mistaken as to why progressives care about such things. Indeed, I tend to believe this is what happened.

            Progressives, on average, do tend to be softer than they ought to be on the subject of communism. But the economic system they actually favor is democratic socialism, and the Chinese brand of communism is a very poor match. There are progressives who would take “socialism” all the way to classic Marxism, and modern Chinese communism would still be a poor match for that.

            But every care the modern progressive might have for economic doctrine takes second place to the all-important Privilege/Oppression axis. Even socialist economics is favored mostly for its ability to shift wealth from the privileged to the oppressed classes.

            So here’s a dryly written hard science fiction book about how the human race has to deal with an invasion by technologically superior aliens who are nonetheless bound by the speed of light and other pesky physical and logistical constraints that prevent their invasion from being an instant cakewalk. Also, there’s some gratuitous anti-communism up front. Lots of people, including myself, enjoy books like this, but progressives are not generally among them.

            The book’s jacket, advertisement, and most of the reviews can spare maybe a couple of sentences to the actual plot, enough to figure out that there’s an alien invasion involved but not much more. Instead, everyone gushes about how the book is written by a Chinese Guy from China. Not a Chinese-American, those guys are practically white by now, but a genuine non-white person from a not-first-world country. Clearly, this author has lots of oppression points and a Unique Cultural Perspective, because no Chinese Guy from China has ever published an SF novel in the United States before. And that makes this a must-read for all American SF fans, because Chinese Guy from China. Seriously. I was delayed for months in reading TBP because I couldn’t find anyone to reliably tell me what was in it.

            It is not at all implausible that most of the early purchases, before word of mouth got around, were based on the author’s nationality or ethnicity, rather than its content or literary merit. And for a book published in November to win the Hugo, it’s the early purchases that matter. Not all of whom will have gotten around to reading the book by the time they have to cast their vote.

            And while the voting patterns suggest that it was the Rabid Puppies that put TBP over the top, it wouldn’t have gotten within reach of victory without a block of ~2400 votes that went:

            (Book by a white woman)
            (Book by a white woman)
            (Book by the Chinese Guy from China)
            NO AWARD
            (Book by a puppy-nominated white male)
            (Book by a puppy-nominated white male)

            Even though the actual content of the Book by the Chinese Guy from China was the sort of thing progressives usually find boring at best.

            Were it possible to settle the issue, I’d bet even money that Hoyt is right about one thing: Most of those 2400 voters didn’t even read the book, but were voting its author’s ethnicity and its not-officially-puppy-nominated status.

            And much more than even money that she is dead wrong, and unkind, uncharitable, tone-deaf, narrow-minded, mean-spirited, and politically inept to have brought communism into the discussion.

          • Nita says:

            @ John Schilling

            the actual content of the Book by the Chinese Guy from China was the sort of thing progressives usually find boring at best

            Yes, your opponents’ literary taste is completely determined by their politics. Progressives HATE realistic physics, and conservatives HATE realistic psychology. It’s a fact.

          • suntzuanime says:

            Isn’t that, like, true? I mean strip your claim of the exaggeration for effect, and I bet you’ll find that there is a substantial difference in ideology between social science faculty/students/graduates and hard science faculty/students/graduates, and the social science people are further to the left than the hard science people.

          • Nita says:

            Science fiction as exploration of the future, its possibilities and challenges (as opposed to a textured backdrop with no impact on the plot) is inherently progressive.

            Here’s how the author explains the popularity of TBP in China:

            “China is on the path of rapid modernization and progress, kind of like the U.S. during the golden age of science fiction in the ’30s to the ’60s. [..] The future in the people’s eyes is full of attractions, temptations and hope. But at the same time, it is also full of threats and challenges. That makes for very fertile soil.”

            And here’s how the American editor explains why she thinks readers will like TBP:

            “The ‘Three-Body’ series sort of scratches the same itch that harkens back to the kinds of books people read when they were kids.”

            So, what do you think the “progressive” sci-fi fans were reading when they were kids — Arthur C. Clarke and Isaac Asimov or John Updike and Toni Morrison?

          • Chalid says:

            @John Schilling: I’d agree that a lot of the early purchases were because it was a *massively bestselling and award-winning Chinese book*, which you’re reducing to “Chinese book.” But there’s a huge gap between getting some early buzz and actually winning.

            Anyway, this is the first non US/UK/CA-authored novel to win in… scrolling through the list, looks like decades… and the argument is that the progressive clique will favor foreign books? I guess you’ve got very strong priors.

            In particular, I think you’re leaning way too hard on the idea that progressives wouldn’t like a hard sci-fi book. If there is a connection between “liking hard sci-fi” and politics, it seems its R^2 is going to be pretty small, certainly nowhere near the levels you’d need it to be to support your argument. Look at the hard sci-fi nominees of the past couple years – you’ve got Charlie Stross (Neptune’s Brood) and Kim Stanley Robinson (2312) and they’re both very liberal, much more than I am. And of course Cixin Liu.

          • John Schilling says:

            The last non-anglospheric SF novel on the Hugo ballot was in 1963, by Jean Bruller of France. But the claim isn’t that progressives favor foreign books, but that they favor non-first-world books. And, at least in the North American science fiction market, pickings there have been pretty slim since Lem and the Strugatskys faded away in the late ’70s. I am willing to accept the promotional claims that TBP was the first Chinese SF book to be picked up by a (non-academic?) US publisher. And I’m drawing a blank on any novels by developing-world authors.

            Also, my claim is not that progressives never vote for hard SF. My claim is that it is implausible that progressives will as a block vote for a book that is A: hard SF, B: about defending the Earth against alien invasion, and C: contains gratuitous commie-bashing(*). Each of those alone has modest predictive power; all of them together are significant. Would it really have been controversial if I suggested that, e.g., relatively few Republicans voted for “Venus Plus X” or “Left Hand of Darkness” as Best SF Novel? Some individuals with eclectic tastes, yes, but not the largest identifiable block of Hugo voters.

            But, don’t take my word for it. However popular the book may have been in China, and for whatever reason, when it came time for American advertisers to promote the book to an Anglospheric SF-reading audience, they chose to promote the author’s ethnicity over the book’s contents. And it worked.

            (*) Yes, Ye needed to be alienated by an atrocity to kick off the plot, but aside from the author’s nationality there’s no reason the atrocity needs to be communism.

          • Nita says:

            Yes, a book by someone who grew up in a very different country, yet loved the same writers as you did, is fascinating.

            And I do want to read criticism of communism by the Chinese, criticism of Islam by Arabs, and criticism of the USA by Americans. It tends to contain less hateful xenophobia and more informative personal experience.

          • Chalid says:

            Going to just flatly disagree about what “most progressives” would do. I’m not sure that the conservative vs liberal factor has any predictive power whatsoever on hard SF (in space opera, certainly, but not hard SF). Commie bashing is something conservatives take more glee in than liberals but I don’t think that anyone’s really turned off by it anymore, especially as it is very distant from any traditional western liberal/conservative flashpoints.

            There’s definitely predictive power in predicting enjoyment of alien invasion/military SF, but I don’t think it applies as strongly to Three Body Problem as it might to, say, Charles Gannon’s stuff. I think, if one were to sum up the progressive impulse here, it’s that they want more kinds of stories to be told and TBP is a completely different take on alien invasion from anything else I’ve encountered.

            On the other side of things, if we’re making gross generalizations, then I think progressives would enjoy the in-depth look into Chinese culture and history more than conservatives would.

          • Gbdub says:

            “Just as a public service announcement, you are allowed to believe her statement was ridiculous and insulting towards Hugo voters without also having to drink the entire jug of Kool-Aid and accuse her of making a racial slur. Seriously, it’s fine.”

            Yes, this exactly. And FWIW, I think she intended the statement to be ridiculous and insulting toward Hugo voters. She doesn’t think her readers are dumb, she’s calling Hugo voters dumb. It’s a throwaway jibe I doubt she fully believes herself. I don’t like this tactic, and she does it a lot it seems, so I’m not a fan.

            But we’ve got alexp tying zirself in knots to prove she’s a racist, because that’s the mind killer and then she can be safely exiled to the black hole. Now we’re grasping at dog whistles, implying Hoyt was maliciously goading her ignorant readers into associating an obviously anti-Communist book with the “Yellow Peril”? Sheesh.

            If you don’t want to be labeled an SJW, don’t engage in maliciously uncharitable misreadings to justify vicious ad hominems (while making it clear you will not tolerate the same tactics from your opponents). Even when the target of your ad hominems is not very nice.

            Also, @Chalid, I think you’re right that progressive doesn’t mean “no to hard SF”. That said, it’s hard to deny that there’s a strain of progressivism that, for better or worse, engages in open racialism and promotion of members of certain groups, other qualities notwithstanding (see the study in this very post showing a preference for black candidates). Honestly I think it’s naive to assume that that didn’t play a role in TBP’s win. And again, I liked the book, thought it had some great ideas, but also a lot of flaws. And I even appreciated the cultural perspective. I just have the feeling it would have been called mediocre if it was written by a random white dude (heck, even I might have thought that – to be honest the cultural stuff was what separated it from pulp for me, and if I was already familiar with the culture, I’d be left with a high concept and so-so story execution).

          • HeelBearCub says:

            @gbdub:

            Forgive me if I am treading ground that’s already been covered, but, I have a question.

            Do you think “ChiCom” is a term that can be deployed (outside of military circles in the 60s or something) neutrally or favorably? Is anyone going to say “Oh, well since you are ChiCom you might like…” to someone’s face in what is supposed to be a pleasant conversation?

            To reverse the situation, if I said something like “Well, the puppies probably only like “The Sparrow” because of the blood drinkers” can that be employed in a way that is only offensive to the puppies and isn’t a slur against Catholics? (No idea if the puppies would like The Sparrow, probably not though).

            It seems to me you are claiming that because the primary target of the criticism is not (necessarily) directly insulted by something, the insulting word can’t be considered a slur.

          • If there’s a term for Maoists (not Chinese, Maoist) worse than “Maoist” or “Chicom”, I would greatly prefer to use that term, just as I prefer “Daȝesh” (= Da3esh or Daesh), a name that entity despises, when referring to the so-called Islamic State. Yes, it’s a slur; no, it’s not racial.

          • AC says:

            Gbdub wrote:

            If you don’t want to be labeled an SJW, don’t engage in maliciously uncharitable misreadings to justify vicious ad hominems (while making it clear you will not tolerate the same tactics from your opponents). Even when the target of your ad hominems is not very nice.

            One hopes that you are self aware enough to recognize the inherent tension, if not outright irony, in this paragraph.

          • Vorkon says:

            It’s also worth pointing out (excuse me for playing armchair psychologist here for a minute) that, having grown up in a Soviet-controlled country and hated it there, Hoyt tends to have a strong negative knee-jerk reaction towards Communism.

            It may be true that she’s misrepresenting her opponents views, but at the very least it might explain why she would say “they voted for him because they were under the illusion that he was a ChiCom,” rather than “they voted for him because he was non-white.”

            It’s also worth pointing out that, while her accusations of anti-puppy voters being supporters of Communism are technically unfounded and are probably irrelevant, I think it’s safe to say there are more Communist supporters among the anti-puppies than the puppies, so it’s not entirely out of left field.

          • alexp says:

            I’m not trying to call out some microaggression or problematic cultural appropriation. I’m calling out a racial slur. If you call me or anyone in my family a ChiCom to my face, I will punch you in your face. It’s as simple as that to me.

          • John Schilling says:

            having grown up in a Soviet-controlled country and hated it there, Hoyt tends to have a strong negative knee-jerk reaction towards Communism

            Sarah Hoyt grew up in Portugal, which was a founding member of NATO. I do not think one can reasonably call it Soviet-controlled, no matter how strong the local flavor of European-brand socialism might have been.

            Not that it should matter. Growing up in Cambodia under Pol Pot would justify a visceral hatred of Communists but not falsely accusing innocent third parties of being communists or communist sympathizers, any more than growing up in 1930s Germany would justify accusing innocent third parties of being Nazis.

          • Vorkon says:

            Oh, huh, I guess I’m an idiot. Sorry about that!

            For some reason I thought it was Poland. I guess I was confused by a vaguely remembered post I read by her a while back about growing up under Socialism, and assumed she was talking about something a little more extreme.

            Anyway, you’re absolutely right; it certainly doesn’t excuse her knee-jerk unsubstantiated accusations. I was just trying my best to explain them. I find it’s usually best to try to understand what people are thinking, and do your best to walk at least a few feet in their shoes.

            Either way, sorry again for misleading anybody! It was unintentional.

          • Jaskologist says:

            As a general rule, it’s a bad idea to extract connotations out of the words used by people for whom English is their second language.

          • Nita says:

            Hoyt happens to be a professional, award-winning writer. According to her Wikipedia page, she’s the author of dozens of works written in English, speaks five other languages, has a degree in English and is a member of Mensa.

            So, her choice of words is likely more well-informed and deliberate than most native speakers’.

          • houseboatonstyx says:

            @ ThirteenthLetter

            My reply to you was marked as spam, so I’ve put it up at my LJ:

            houseboatonstyx.livejournal.com/274375.html

        • Brian says:

          I was a fan of Scalzi’s sci-fi; he’s one of a very few sci-fi authors who do humor well. Agent to the Stars and The Android’s Dream were excellent and very funny. Old Man’s War is great, and if I tried to pin a political label on the author’s viewpoint from there, it would be “JFK-like military realist.” One throwaway character looks in hindsight like a perfect John Kerry expy, and not in a flattering way. The first few sequels were good too.

          Then at some point he turned into his current SJW mode and his writing got a lot weaker. His reboot of Fuzzy Nation was quite good, if a little too much in the “businessmen are evil” mold. Redshirts started out funny, but turned into a maudlin mess by the end. And his last two sequels in the OMW universe just came out weird and confusing.

          • walpolo says:

            Old Man’s War is the worst science fiction novel I’ve read since The Jedi Academy Trilogy. The first 100 pages are literally nothing but infodumps and fart jokes. And no, he does not do humor well. (At least not in OMW)

          • Robert Liguori says:

            So, it’s a slightly different audience than most people here follow, but…does anyone here follow Sinfest? Did anyone else track the Great Transition of a few years back?

            I kind of wonder if this is a thing that we can consider a general pattern. Like, an author (for whatever reason) gets involved with a subgroup, starts pandering to them, goes from warm approval from many people to high approval from just the group, and stays with producing media for that group for some time.

            I wonder if it’s an effect of the activation energy for commercial works. Like, do people need to sell a lifestyle experience these days? Does everyone need to find a group that strongly believes in supporting their side of a culture war, because they’re the ones who buy the most books and comics the most reliably?

            I dunno. It would be really hard to get confirmation or do research on this, for obvious reasons of self-confounding.

          • Protagoras says:

            @ Robert Liguori, I followed Sinfest before the Great Transition. I actually thought that in addition to being generally amusing it sometimes had some good political content before the transition (I’m a liberal, feminist, etc. myself) but became much too heavy-handed and completely unfunny afterwards. I have no idea who the current audience are.

          • Where Little Fuzzy had mining for gems unique to that planet, Blurry Nation had mining for coal to export to Earth for energy; I count that as “failed physics forever”. Also, stripping all characters of any sympathetic traits does not for an enjoyable story make. De gustibus and all that, but anyone claiming to have liked Scalzi’s reboot is not someone whose recommendations I personally am likely to agree with.

          • rsaarelm says:

            I get a very cynical feeling from reading Scalzi’s books. There’s very little sense that he is writing about anything like a future he’d actually believe in, either directly or obliquely, only that he’s treating the existing SF corpus as entertainment tropes he mixes and matches to assemble a product with a desired focus group appeal.

          • @rsaarelm: very much classic SF/F from the sounds of it. Traditionally, at least, the primary purpose has always been to entertain.

          • Cauê says:

            I’ll echo Protagoras about Sinfest (only observing that “I’m a feminist” does not transmit much useful information about one’s beliefs nowadays).

        • ThirteenthLetter says:

          “Given that probably a sizable majority of fans of Scalzi the SF writer do not share his politics, you’d think he’d be more charitable to his political opponents”

          We’ve been seeing this a lot lately, not just from egotistical authors. The privilege of lustily kicking the outgroup at every opportunity is worth sacrificing X percent of one’s audience/ad revenue/sales, as well as the ongoing expense and annoyance of dealing with any retaliation the more angry outgroup members might dream up. The normal bloodless capitalist incentives are helpless before this scenario.

          • Robert Liguori says:

            Is that the case, though? I feel like we’re seeing something like the hyperspecialization of niche porn genres, driven by the same market forces. My thought is that in this day and age, there’s so much competition in the marketplace of media for your consumer dollar that if you’re not the tip-top of the iceberg in terms of quality, you can fall by the wayside. A thousand people might read you or watch you, but if only five of them buy your material and pass on good word of mouth, that’s not sustainable.

            But if you go niche, if you pick a group that will reward you for catering to their very specific tastes, large enough to have a sizable membership, but small enough that you can compete in the arena, then you can sell to a devoted audience. You can see this effect with the aforementioned hyper-specialized porn, and to another degree with things like Christian Rock, where consuming one tribally-coded type of media becomes a tribal signifier, which becomes all the stronger when that media isn’t very good.

            Going the route he’s gone certainly hasn’t been a hindrance to Scalzi’s career, at least.

          • walpolo says:

            It’s a good question, but I strongly doubt that Scalzi had the profit motive in mind when he started bringing politics into his blog, and he doesn’t write the sort of work that you would expect to draw a left-wing or SJ audience, so it’s really a bit of a mystery how he makes the business work as well as he does. It’s certainly not because he writes well!

        • LHN says:

          @gbdub: [Scalzi] said the reason he got into military sci-fi was basically mercenary – he wanted to make enough money to write full time, so he wrote what he thought would sell. (Again, you’d think this would garner at least a modicum of Puppy sympathy…) At the time I took this as self-deprecating humor, but now I’m not so sure.

          I think it’s self-deprecating humor with a layer of truth to it. I read Scalzi’s online writing a bit before his fiction career started. He was a columnist in my college newspaper, later the editor[1], and while I didn’t really know him he was a friend-of-a-friend. So when his name appeared I tended to notice.

          He was doing corporate writing that he seemed pretty happy with, with fiction writing as a hobby that he didn’t really consider a serious option, since he didn’t think there’d be any money in it. IIRC, he more or less said “milSF is what sells, I don’t really want to write milSF, oh well”. (Which struck me as an oversimplification of the field at the time. But there’s no time in which freelancing fiction has been a reliable ticket to a middle class income in any case.)

          He’d written one novel, “Agent to the Stars”, about first contact between an alien and a Hollywood agent, which he put up free on his website. (I liked it better than much of his more successful work, FWIW.) Then he found he had an idea for a milSF story after all. But he still didn’t think it was worth the trouble of shopping it around, so he serialized it on his website.

          That of course was “Old Man’s War”. Patrick Nielsen Hayden read it online, made him an offer for it, and the rest as they say is history.

          So while I don’t know anything about his internal process, it’s probably true that his subsequent concentration on the Old Man’s War universe and the military subgenre is the product of commercial considerations. But it wasn’t a calculated market analysis so much as “okay, if this is what they want, I’ll give it a whirl” (without even really planning to sell it initially) and having it turn out to be unexpectedly wildly successful.

          • Held in Escrow says:

            Scalzi is an interesting writer. I think he’s one of the best out there at coming up with fun and interesting plot bunnies. Agent to the Stars was an enjoyable take on the Christopher Buckley styled novel. Redshirts had me chuckling in glee with the frenzied antics of people who realize they’re super expendable. Old Man’s War, despite some Mary Sue issues, tended to have solid scenes which kept you hooked.

            My issue with Scalzi is that he runs out of power halfway through his works. It doesn’t matter if it’s a short story or multi-book series, he always wants to try to change the equation around the climax and it always falls flat on its face. Old Man’s War was lucky in that the first book avoided this by letting the big change happen later on (and the jump from the reveal at the end of book 2 to the start of book 3 was quite possibly one of the biggest flubs in popular fiction). Redshirts goes full idiot with its fourth wall bullshit. Agent to the Stars has a distinctly unsatisfying way of resolving its premise partially thanks to the “oh god” implications. Scalzi just can’t really follow through on a concept well enough.

            I suppose he’s also fallen on the opposite end of the spectrum with Lock In, which just futzes around with the premise and never really goes anywhere, but I put that down to detective fiction being really hard to write.

        • The reason The Three Body Problem had an unsatisfactory ending is that it’s the first book in a trilogy.

          As for why people voted for the book, I’ve gotten increasingly sensitive to claims of mind-reading. There’s plenty of that coming from all sides.

  7. suntzuanime says:

    Regarding the Pakistani Mrs. MD problem: Stop subsidizing medical education? It took me a while to figure out why it was any more of a problem than any other damn fool zero-sum status game people spend their money on, but when I got to the line about “millions of rupees on subsidies per student” it was apparent that the government is interfering in the market and somehow this has had unforeseen side-effects which nobody could have seen coming.

    If you’re going to subsidize something at least you should subsidize what you actually want, which is working doctors, not doctors-in-training. Give ’em student loans and make ’em pay them off with inflated doctor salaries, isn’t that what we do in America?

    • Scott Alexander says:

      Assuming there are still a limited number of spots in the education system, and if rich husbands are happy to pay off their wives’ medical school debt, isn’t it still possible that most spots will be filled by marriage-seekers?

      • suntzuanime says:

        Why are there limited spots in the education system? Why don’t you build more medical schools when people are so eager to have a medical education (for whatever reason)?

        • Pku says:

          Hell, you could charge them extra and use this to help fund the health system.

          • Deiseach says:

            (A) package consisting of an admissions evaluation, a set of exams and a credential, all credibly attesting to the properties that employers want certified.

            This package can be produced and distributed far more cheaply than universities do it today

            I think we agree that higher education is not purely about getting an education, but what you underestimate is the value of having attended not simply University of Fishtown (formerly Fishtown Polytechnic) but having made it into University of Oldville, where the snob value is all part of the package.

            First, the idea that Oldville provides a superior product (which it very well may do; how much research does it sponsor? Has it any Nobel winners on its list of past and present faculty?)

            Second, the idea that Oldville is expensive, exclusive, and hard to get into. Anybody with a student loan can get taken on at Fishtown, but Oldville requires that bit more. A graduate from there is assumed to have all these extra characteristics.

            Thirdly, networking and yes, old school tie. The jokes about the current Tory government being all Old Etonians may not be quite accurate, but there is undeniably a network of “friends of friends” who were all at the same university.

            You could have fifteen new ways of providing accreditation and checking qualifications, and people would still be competing to get into a certain number of universities, and there would still be the perception that the new degrees were second-rate and inferior.

        • Dan Simon says:

          More generally, when a scarce good (such as medical education, or higher education in general) is overpriced because people are charging cartel rents for it, the obvious solution is to find ways to increase the supply. The key issue here is to understand the actual good, which is not an education–that’s already available from lots of sources for relatively little money–but rather a package consisting of an admissions evaluation, a set of exams and a credential, all credibly attesting to the properties that employers want certified.

          This package can be produced and distributed far more cheaply than universities do it today. But rather than encourage such cheaper alternatives, federal and state governments seem to be largely wedded to the “accredited post-secondary institutions” cartel, which then does the natural thing and collects rents from everyone who depends on its services.

          Ultimately, though, a divorce is inevitable. Consider, for instance, Western Governors’ University, an online multi-state government institution granting degrees in a small, easily served set of fields for which the conversion from degree to employment credential is fairly straightforward. I believe it’s just a matter of time before that model spreads to enough fields that job-seekers will no longer consider a degree at a traditional university necessary to impress employers. And once that happens, the problem of college education being too expensive will magically disappear.

          • John Schilling says:

            Right, but the “employer” for 67% of female Pakistani MDs is a husband who wants a wife who can sit at a dinner table with a bunch of smart, well-educated men and gain their interest and respect as an intellectual and social equal. These people are not going to actually check her credentials, and they don’t care about her detailed understanding of medicine nor her technical competence to treat patients. What they do care about can’t be taught by a MOOC nor measured by a test – or perhaps it can, but I don’t think anyone has even started down that path and I don’t think it is going to be easy.

            Traditional colleges and universities are pretty good environments to learn this skill, but you pick it up between the lectures and coursework, not during it.

          • 27chaos says:

            You’re saying that you expect middle management to be smart enough to recognize talent when they see it, rather than rely on traditional bureaucratic measurements. I am hopeful, but skeptical.

        • suntzuanime says:

          Wait hang on I thought of a better question. If there are limited spots in the education system such that there is more demand for education than supply, WHY IS THE GOVERNMENT SUBSIDIZING DEMAND?

        • BBA says:

          According to Wikipedia, Pakistan has added about 30 medical schools in the last five years. That’s about 1/3 of the schools that are currently open.

        • Scott Alexander says:

          Ah, good point. I’m stuck in my USA mode of thinking, where we can’t expand the medical education pipeline for complicated terrible reasons.

          • brad says:

            Isn’t the existence and expansion of the Caribbean medical school phenomenon somewhat to the contrary? It looks to me like if you are willing to pay for them, all of a sudden residency slots don’t look so static.

            Apologies if there’s a whole long post on this somewhere in the archives that I haven’t read.

          • Steven says:

            “Isn’t the existence and expansion of the Caribbean medical school phenomenon somewhat to the contrary? It looks to me like if you are willing to pay for them, all of a sudden residency slots don’t look so static.”

            Med school slots have expanded dramatically.
            Residency slots have not.
            The bottleneck is the latter, not the former.

          • brad says:

            Again as I understand it, these Caribbean medical schools have started to pay hospitals to take their students as residents (with the money ultimately coming from the students of course). That in turn has led to the phenomenon of non-Medicare financed residencies.

            Traditional medical schools and the medical establishment generally have (implicitly) claimed that Medicare is the only possible way to finance residencies and if there’s a shortage of residency slots, there’s nothing anyone other than Congress can do about it.

      • lmm says:

        Sure, but in that case at least rich men are paying rather than the government. That sounds like a win.

      • Deiseach says:

        If a medical degree will get your daughter married to a higher status man than your family could attract otherwise, the families will pay for their daughters to go through med school. So applicants from better-off families will take up places, but not necessarily go into practice, and you are still left with a shortage of working doctors plus smart but poor students being filtered out.

        Maybe the solution is to make “doctor” a low-status profession? 🙂

    • Pku says:

      The loan thing doesn’t seem to work that well in America though… I’d suggest the idea of subsidizing on the condition that you stick around as a doctor (or equivalently, giving student loans you don’t have to pay if you stay on the job), but that might just cause a lot of people who quit medicine to go bankrupt (OTOH, since they have the option of staying in medicine unless they can afford to leave, most of them could probably avoid that). You would be screwing over people who got through med school and then didn’t get a job/got fired, or med school dropouts (you could excuse those loans too, but only if med school dropouts don’t get status).

      (Also, insert the obvious terrible joke about smashing pots to get the rupees to pay for med school. I know it’s bad, but I’m not sure I could live with myself if I didn’t make it.)

      • brad says:

        How are medical school loans not working out in the U.S.? I’d say of all the student loan programs, that one is doing the best.

        Also, at least in the U.S., the number of involuntarily unemployed doctors is vanishingly small.

    • cewr says:

      Are many of these female MDs going overseas?

    • Who in Pakistan is paying inflated doctors’ salaries?

    • speedwell says:

      How many fewer practicing doctors does Pakistan need, given the number of doctors in “home-based private practice”? I’m not joking. I once proposed making basic health classes in public schools include everything from identifying the most common pediatric diseases to understanding the side effects and contraindications of the most common over-the-counter medications. I had a doctor once tell me he was impressed by how self-educated I was and how it made me so easy to treat (“Even though I disagree with you a lot, Doctor?” “Because you speak up instead of letting me make incorrect assumptions.” Best doctor ever). Giving housewives a full medical education obviously isn’t ideal, but I suspect that it’s more ideal than having nobody in the neighborhood competent to say, “That’s not a normal thing happening to your lady parts and I think you should overcome your embarrassment at seeing a male doctor about it.”

    • Ryan Beren says:

      Similar solution:

      Charge them a bundle in medical school fees, kept at zero interest for a decade or so, then for each year of full time doctoring, cancel 10% of the original debt. This way, the government subsidizes becoming-and-being-a-doctor instead of just becoming-a-doctor. Also, it still lets women who want the degree for marriage purposes get the degree. They just have to pay for it.
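
      A minimal sketch of this schedule in Python, purely for illustration (the 2,000,000-rupee fee is a made-up number; the zero interest and the 10%-of-the-original-debt-per-practicing-year forgiveness are taken from the proposal above):

      # Hypothetical illustration of the fee-forgiveness schedule described above.
      def remaining_debt(original_fee, years_practiced):
          # Each year of full-time doctoring cancels 10% of the ORIGINAL fee;
          # the loan carries no interest, so ten years of practice clears it.
          forgiven = min(years_practiced, 10) * 0.10 * original_fee
          return original_fee - forgiven

      fee = 2_000_000  # rupees -- an illustrative figure, not from the comment
      for years in (0, 3, 10):
          owed = remaining_debt(fee, years)
          print(f"{years} years of practice -> {owed:,.0f} rupees still owed")
      # A graduate who never practices repays the full fee; ten years of
      # full-time work cancels the debt entirely.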

    • Eli says:

      If they have a shortage of doctors, and they put less money towards training doctors, how does that result in more practicing doctors?

      • suntzuanime says:

        They don’t have a shortage of doctors, they have a shortage of practicing doctors, because their doctors become housewives instead of practicing. Did you read the article?

        • Eli says:

          So the easiest thing isn’t, “Make medical education more expensive”, which just reduces the supply 10 years down the line, it’s, “Pay enough to bring nonpracticing doctors into the market.”

          • Nita says:

            They’re staying at home not on economic grounds (salary too low), but due to deontological considerations (you must obey your parents / husband, motherhood is your primary duty as a woman).

    • Considering the historical and current opposition to education for women, I’m kind of pleased to find out there’s a place where having an educated wife is a status symbol, even if this isn’t optimal for a bunch of reasons. It’s better than a lot of the alternatives.

  8. Dan Simon says:

    Okay, I’m going to really try to understand the whole AI risk thing. Please tell me if I’m close:

    One day, we will be able to create machines so intelligent that they will be billions of times more intelligent than humans, in the sense that they will be able to find solutions to questions so deep, subtle and complex that humans will in all probability never be able to solve them without help from such machines. Obviously we will find such machines extremely useful, build lots of them, and give them all sorts of problems to solve.

    Then one day we will give one of these machines a very important problem to solve–averting environmental catastrophe, say–and the hyperintelligent machine will notice in passing that the optimal solution to the problem involves exterminating humanity. So the machine will set about exterminating humanity as a subgoal of its assigned goal. Because the machine is so hyperintelligent, it will easily find an efficient way to achieve this subgoal, and destroy humanity.

    In other words, the fear is that we will one day create a machine so unfathomably intelligent that it will be able to solve just about any problem we give it, no matter how deep and complex; so incredibly dedicated to solving the problems we give it that it will be willing to go so far as to destroy all of humanity in pursuit of a solution; so complete in its understanding of humans that it will be able to circumvent any efforts that we make to divert it from fulfilling its mission of destroying us all; yet so staggeringly stupid that it will simply never occur to the machine that the problem we’ve given it implicitly includes the requirement that we not destroy humanity in the process–something that a normal ten-year-old child would be able to infer with no difficulty whatsoever.

    Or to put it another way, the fear is that the hyperintelligent machines we build will come to resemble grotesquely exaggerated versions of the autism-spectrum-hopping computer nerds who fear them: preternaturally brilliant at abstract problem-solving, but sorely lacking in empathy and notoriously incompetent at reading basic human social cues.

    Am I close?

    • Pku says:

      Yes, except you make it sound ridiculous when it could be entirely reasonable. Because while a superintelligent machine would probably be aware that the solution it suggested wasn’t something we’d like, it wouldn’t need to care. (This is a pretty good explanation: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html )
      (Also, “autism-spectrum-hopping computer nerds”? I’m torn between getting into an explanation of the problems with that statement and pointing out just how much more socially inept than them you would have to be to use the term here in the first place.)

      • Dan Simon says:

        1) I actually think it’s ridiculous at a far more fundamental level than I’m describing here (I can’t seem to post links here, so search for “garis” on icouldbewrong dot blogspot dot com for more details).

        2) If these machines “wouldn’t need to care” what we want from them, then why would we build them in the first place? The transition from “useful hyperintelligent machine dedicated to solving our problems” to “humanity-destroying behemoth” needs *some* explanation, after all–doesn’t it? (I assume that’s why Scott spends so much time walking through it in his posts on the subject, for example.)

        3) There are certainly those who’d consider the phrase “autism spectrum-hopping computer nerds” perfectly applicable to me, so please don’t assume I meant it as an insult. My point was more that most people have difficulty separating the concepts “intelligent” and “human”, and therefore tend to think of a hyperintelligent machine as a kind of grotesque exaggeration of themselves, rather than carefully thinking through what concepts like “intelligent” and “hyperintelligent” might actually mean. (See the above link for more details.)

        • Pku says:

          1) I found the link, but I don’t have time to read it right now. I will say that I consider “AI risk is dangerous and we can do something about it” to be low-probability enough that running the numbers shows it’s not the most effective use of donated money (at least, that was the suggestion of the calculator Scott uploaded a while ago, even without me having to say “the probability option doesn’t go low enough”). I’ll try to get to reading it later.

          2) There’s the sort of things that have been mentioned here before (which seem like believable risks to me, if not as certain as some people here think). The best description I’ve heard was this: I worry about what the computer might consider “acceptable costs” – think of what your least favourite politician would consider “acceptable costs”, and then consider that an AI might be much more different from you than a human on the other side of the political spectrum, even if it was more-or-less well written.

          3) First of all, I should apologize. I think I overreacted here.
          I’m not sure people here think about AI as grotesque extensions of themselves, though – it seems more like grotesque extensions of normal computer programs, which do what you tell them instead of what you want (this might be far from what they’ll actually be like, but it may be closer than expecting AI to act like an improved human).
          Another option that I haven’t seen brought up here is the idea of a mid-level intelligent AI – even if a hyperintelligent AI could always get exactly what we mean, there might be a dangerous middle where it’s intelligent enough to do massive damage, and not quite intelligent enough to realize why that would be wrong (the “autism spectrum-hopping computer nerd” description might apply here).

          • Dan Simon says:

            2) Sure, but we’re not talking about subtle judgments regarding trade-offs and priorities–we’re talking about whether a computer thinks it’s okay to wipe out humanity or not. A computer that can’t figure out that its instructions implicitly rule that out isn’t hyperintelligent in any useful sense of the term. And a computer that knows but ignores our wishes is surely going to ignore our expressed wishes in other respects as well. Why would we ever want to build such a machine in the first place? What use is a machine that doesn’t necessarily even do what we tell it to do?

            3) You may be right about this–certainly others have brought up the common science-fiction vision of an intelligent computer as a sort of hybrid of a human with a computer program. As I explained in my blog post, I consider that hybrid vision to be completely incoherent and confused about the nature of both humans and computer programs.

          • FacelessCraven says:

            @Dan Simon – Define “Wipe”, “Out”, and “Humanity”.

            The question is how do we make the AI want to make the sort of future that we want to live in. This is hard because it’s pretty clear that we, as a species, have no idea what we want pretty much until the moment we get it, much less know a way to explain “human happiness, flourishing, and satisfaction” as a mathematical formula.

          • TheAncientGeek says:

            No, an SAI doesn’t need to want to make a better world in order to be safe, and it doesn’t need to want to make the world a better place in order to make the world a better place. A drive to answer questions correctly would do.

      • anodognosic says:

        >The transition from “useful hyperintelligent machine dedicated to solving our problems” to “humanity-destroying behemoth” needs *some* explanation, after all–doesn’t it?

        The explanation is that “useful hyperintelligent machine dedicated to solving our problems” is very, very hard to define with the strictness necessary to stop it from becoming a humanity-destroying behemoth. It would have to essentially be able to predict human preferences in unforeseen situations, which means that we need to figure out the basic machinery that outputs our preferences. Anything else probably gives us a humanity-destroying behemoth.

        (This might also give us a humanity-destroying behemoth.)

        • Dan Simon says:

          Huh? How hard is it to weave an implicit, “…and don’t destroy humanity” clause into every problem definition? Even we stupid humans are capable of that…

          • Erik says:

            We stupid humans have spent a lot of time and effort on language. Most of the results of that time and effort we aren’t even aware of until we see computer language processors get it hilariously wrong by comparison. Even then we have edge cases that require significant amounts of thought; also, there are outcomes which don’t quite “destroy” humanity that we still don’t want, like, say, rendering the entirety of Eurasia uninhabitable.

            “Don’t destroy humanity, and don’t render Eurasia uninhabitable, and don’t sterilize everyone in the Western hemisphere, and don’t release flesh-eating bacteria that flay the skin off everyone in Africa, and don’t solve heritable diseases by sterilizing 99% of people and breeding from the rest, and don’t distim the doshes, and don’t, and don’t, and don’t….” is a very long list of implicit clauses that it’s utterly impractical to specify in full. Moreover, it’s probably okay to render a parking-lot-sized area uninhabitable. And sometimes a person might have a rare skin disease that is best treated with flesh-eating bacteria. So if the machine is powerful/dangerous/resourceful enough that doing any of these things is a risk, it needs not just a single natural-language clause, but a large human value function built into it.

          • jaimeastorga2000 says:

            How hard is it to weave an implicit, “…and don’t destroy humanity” clause into every problem definition?

            Much, much harder than you think it is. You can’t just tell the machine “and don’t destroy humanity” in English; you have to actually write out what “and don’t destroy humanity” means in precise, mathematical language. And when you try to, you find that it’s a statement that carries a lot more complexity and subtlety than you would expect. See Eliezer Yudkowsky’s “The Hidden Complexity of Wishes”.
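
            As a toy illustration (nothing here is anyone’s actual proposal; the plans, numbers, and constraint are all invented), suppose we formalize the clause naively as “the number of living humans must not decrease” and hand it to a purely literal optimizer:

            # Invented stand-in for "don't destroy humanity": the living-human count must not drop.
            PLANS = [
                # (description, paperclips_made, humans_alive_afterwards)
                ("mine asteroids politely", 1e6, 7e9),
                ("pave Eurasia with paperclip factories", 1e12, 7e9),
                ("convert everything; keep 7e9 humans sedated in vats", 1e15, 7e9),
            ]
            def satisfies_constraint(plan, current_population=7e9):
                return plan[2] >= current_population
            best = max((p for p in PLANS if satisfies_constraint(p)), key=lambda p: p[1])
            print("literal optimizer picks:", best[0])  # the vats plan: constraint technically satisfied

            The point isn’t that anyone would write that particular constraint; it’s that any formalization you can actually write down tends to have holes like this, and a literal optimizer has no reason to steer around them.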

          • Dan Simon says:

            Sure, understanding natural human language is really difficult to program–but, then, so’s the type of intelligence sufficient to figure out how to destroy humanity, then execute on the plan, circumventing all safeguards put into place by humans. If we’re going to hand-wave away the implausibility of the latter by saying, “of course it can figure out how to do that–it’s superintelligent!”, then we forfeit the right to hand-wave away the former by saying, “…but understanding what humans mean when they express themselves in natural language is really, really hard!”.

          • drethelin says:

            Dan, the problem isn’t whether the computer understands but whether we manage to make it care. Have you ever had kids? Or taught a class? Have you ever had to command anyone? There are multiple ways to fuck up orders you are given. Some of them come from not understanding, but some come from not wanting to fulfill them. And no amount of intelligence will avoid that! If a child doesn’t want to clean his room, but instead to make a mess, it doesn’t matter if they are smart enough to understand your command to do the opposite. The motivation problem is the issue. And we don’t, in fact, know how to make computers that have the motivations we want them to have.

          • Dan Simon says:

            When and how does it stop “caring” what we tell it to do? Why would that only start with superintelligence, and not long before? And what does it even mean to attribute a subjective quality like “caring” to a computer, whether hyperintelligent or not?

            There are some very serious underlying problems with this whole chain of reasoning. Please read my blog post–search for “garis” on icouldbewrong dot blogspot dot com–for more on this issue.

          • Jiro says:

            If a child doesn’t want to clean his room, but instead to make a mess, it doesn’t matter if they are smart enough to understand your command to do the opposite.

            However, this is *not* what MIRI-style proponents of AI risk are saying. They are not claiming that the AI will hurt humanity because it doesn’t want to obey orders; they are claiming that the AI will hurt humanity because it is doing its best to obey orders, but we can’t give it a specific enough order.

            The analogy is not a child who wants to make his room messy, but a child who is told to clean his room, but he doesn’t understand the word “clean”, so you told him to remove all the toys. He then covers the floor with candy wrappers in an honest attempt to obey your order that there are to be no toys on the floor.

          • Aris Katsaris says:

            Do please explain how *you* define humanity, let alone how you would be able to specify that definition enough to be parsable by a computer program.

            Are you defining humanity by its DNA? Then your superintelligent machine wouldn’t be able to accept humanity making adjustments to its DNA, let alone replacing the whole mechanism of DNA with something else?

            Are you defining humanity by the mental range the human species currently occupies? If so would you declare “inhuman” (and as such acceptable extermination targets) anyone who falls (or will fall in the future) beyond the range of current human mental processes, e.g. is smarter or has been made to think as easily in four dimensions as we do in three?

            Do please explain how *you* define humanity, in normal human language, before even attempting to claim how you’ll explain that concept to a machine. Because I personally don’t know how to define humanity for my own self let alone be capable of communicating the concept to another human (let alone another non-human mind).

          • Jeff H says:

            drethelin, isn’t that largely because we don’t know how to make computers with anything recognizable as “motivations” at all? A lot of awfully smart people are far from convinced that’s even possible. If we can’t accomplish this, the problem of accomplishing it in a particular given way is moot. Even if we can, I don’t see how we can start on that problem without more of an idea than presently exists about what such a machine would actually be like.

          • roystgnr says:

            How hard is it to write extremely complicated software from hand-wavy specifications without any bugs? It’s difficult to be sure, since none of the millions of people trying to do so have ever succeeded, but if someone ever does we should definitely ask them.

          • TheAncientGeek says:

            @Jaime

            You don’t have to mathematically encode the whole of human value. It may not even be useful… how do you graft it onto a neural net?

        • drethelin says:

          Jiro: that is a separate problem that also exists, but considering people seem perfectly comfortable spinning up enormous neural nets just to SEE WHAT THEY DO, the issue of figuring out how to program a computer that even WANTS to help us is also important. There is more than one step along the path to safety: Motivation is one, comprehension of others (us) is another, and so on.

          • Vitor says:

            Why exactly shouldn’t we feel perfectly comfortable spinning up huge neural nets just to see what they do? It’s not like neural nets possess agency and purpose, and “what they do” is limited to writing a couple of files to your filesystem.

          • John Schilling says:

            There is the small problem that we’ve built almost our entire information infrastructure around the idea that “file” should not be clearly distinguishable from “program” and “your filesystem” should not be clearly distinguishable from “the internet, to which everything we care about is increasingly connected”.

            I’m not terribly concerned about an unfriendly AI being infinitely capable of escaping a properly-constructed box. There is, nonetheless, some degree of risk in telling a piece of software that you don’t fully understand, essentially, “Hey, write some stuff that might or might not be programs, and put them somewhere that might or might not be on the internet, but you understand that we don’t want you to reprogram the entire internet even though we deliberately fuzzified the definitions of ‘program’ and ‘internet’, right?”

            Chemists who still adhere to the “let’s mix these interesting things together and see what happens” model of chemistry, usually do so under properly ventilated and scrubbed fume hoods. I rather suspect that a lot of AI research is going to be done without equivalent safeguards, and if I believed hard takeoff was at all likely, I’d be somewhat frightened by that prospect. As is, it’s going to be a damned nuisance.

          • Vitor says:

            You’re wrong on the details. First, files and programs are not even remotely the same thing; we have in fact based all our infrastructure around the exact opposite concept for the last 50 years. Users are sandboxed by default, even if you don’t really expect them to do anything bad. Gmail is not ever going to execute your email attachments by accident. Secondly, neural networks may have the word “neural” in their name, but they’re not going to spontaneously gain consciousness anytime soon. Even if they did, their only way of interacting with the world would be writing code somewhere and hoping that you execute it by accident.

            That said, I am honestly trying to understand the spirit of the argument. I do see the way problems can eventually sneak into a system too complex to be understood as one piece, precisely because everyone assumes that somewhere there must be a safeguard and does whatever is most convenient for them, locally. E.g. in browsers, javascript has started to sneak in to perform functions that should be provided by the browser, like rendering content. For convenience we start allowing more and more code execution where it doesn’t belong, even though nobody consciously decided that we wanted that, and suddenly we have a very vulnerable system on our hands, easy for an AI to break out of (because the researcher of course put his code in the cloud, again for convenience).

            Is that more or less your line of argument?

        • TheAncientGeek says:

          It’s hard to solve a particular version of the FAI problem, the singleton God AI. But you can aim lower.

    • Rangi says:

      Have you ever programmed a computer before? “Unfathomably intelligent, yet staggeringly stupid” describes them quite well. Scott just linked to a great example of their intelligence: combining the subject matter of a photo with the style of a painting. As for them being “staggeringly stupid,” you’re saying the machine should realize what we implicitly want, but that’s just not how computers work. Sometimes new programmers expect the compiler to read their comments, or to rearrange statements in the “obvious” order, but all the computer will do is what you tell it to do—and it will do that with surprising speed and cunning. (I think “cunning” is a better descriptor than “intelligent,” since it implies skill at accomplishing a goal without wisdom to understand the assumptions that a human would make.)
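
      A minimal example of that literalism, with the bug planted deliberately: the comment and the name announce one intent, the code does something slightly different, and the interpreter runs what was written, not what was meant.

      def average(numbers):
          # Intent (per the name and this comment): return the mean of the list.
          return sum(numbers) / (len(numbers) - 1)  # off-by-one; the interpreter doesn't care
      print(average([2, 4, 6]))  # prints 6.0, not the intended 4.0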

      • Dan Simon says:

        Yes, I know that computers *today* work that way–but then, nobody would ever accuse them of being “hyperintelligent”, would they? If they’re “hyperintelligent”, then surely they could pass, say, a Turing test–right? And wouldn’t being able to pass a Turing test imply a certain basic ability to infer the kinds of implicit-yet-obvious meanings in human statements that other humans decipher all the time–like, say, “a ‘solution’ to a human problem that involves destroying humanity isn’t really a solution”?

        Again, I refer you to the blog post pointed to above for further discussion of this and other issues surrounding the concept of “hyperintelligence”.

        • Aris Katsaris says:

          “If they’re “hyperintelligent”, then surely they could pass, say, a Turing test–right? ”

          Can you pretend to be a wolf so well that a wolf would think you to be a wolf?

          Can you pretend to be a bee so well that a bee would think you to be a bee?

          Are you more intelligent than a bee or a wolf?

          I believe the Turing Test to be a quite uninteresting test. Something can be more intelligent than you, intelligent enough to be able to wipe you out if it so wills, and still not be capable of pretending to be you.

          > And wouldn’t being able to pass a Turing test imply a certain basic ability to infer the kinds of implicit-yet-obvious meanings in human statements that other humans decipher all the time–like, say, “a ‘solution’ to a human problem that involves destroying humanity isn’t really a solution”?

          At some point of intelligence it would indeed be capable of understanding how you would have wished to have programmed it if you had been as intelligent as it is. It would be capable of understanding what you *meant* to have done. It would not, however, care about that. It would be as you programmed it, not as you wish you had programmed it, unless you’re somehow capable of including this meta-aspect in your programming (“change yourself to be as I’d wish to have made you”).

          In short we’re fucked if we don’t make its self-modifications make it move into alignment towards what we’d have wished to have made without knowing in advance what we’d have wished made.

          How do you program that?

          • Vitor says:

            > Can you pretend to be a wolf so well that a wolf would think you to be a wolf?

            Ok, I’ll bite. First of all, a Turing test abstracts away any biological / structural differences that would make us immediately recognize something as not human. A Wolf-Human Turing test would need to achieve the same thing, reducing communication to its barest essence, such that if a wolf ends up convinced I’m not a wolf, it’s because it noticed a certain non-wolfishness in my thought processes. If a wolf identifies me as a human by my gross smell and pink skin, that wouldn’t count. Same with a clunky remote-controlled wolf-robot, etc.

            Therefore, I believe a wolf-human Turing test is not possible to set up fairly. A minimum capability of abstract reasoning in the tester is required for the concept to even make sense. If we assume that we could somehow overcome this obstacle (by magic, for the sake of the thought experiment), then I would dare speculate that I might be capable of fooling a wolf, given a couple of years to thoroughly study lupine behaviour.

          • nonanon says:

            Bees and wolves have both been domesticated, so we are clearly capable of pretending to be bee-enough for bees to willingly produce honey for us, or wolf-enough for dogs to want to protect our homes.

        • moridinamael says:

          This is such a weird conversation. You *are* an AI-Risk proponent. You keep saying over and over again, “Look, obviously we need to make the AI able to understand human desires; what kind of lunatic would create a superintelligence that doesn’t understand human desires?” And the answer is that there are more than enough lunatics willing to do that, especially if it turns out that making a very good problem-solving machine is easier than making a very good problem-solving machine that checks assiduously that it has understood its user’s intent.

          • Dan Simon says:

            Well, if you’re talking about lunatics making exceedingly dangerous, not-very-useful machines, that’s a risk we already face today, and will for the foreseeable future. It also has nothing to do with intelligence, let alone “superintelligence”–the “doomsday machine” from Dr. Strangelove, for example, will suffice.

          • Saint_Fiasco says:

            That’s true. It’s why we have lots of regulatory bodies that try to prevent lunatics from acquiring materials and technology that can be used to make weapons of mass destruction.

            Something similar might be necessary for AI. Perhaps not to the same extent, because it’s a less likely threat than the WMDs that already exist, but certainly more oversight than zero.

          • moridinamael says:

            > Well, if you’re talking about lunatics making exceedingly dangerous, not-very-useful machines, that’s a risk we already face today, and will for the foreseeable future.

            Y…yes. Exactly.

            “So what if AI is dangerous. Nuclear weapons are dangerous.” That’s not really a rebuttal.

          • Dan Simon says:

            I disagree.

            1) I consider AI risk highly implausible and quite possibly nonsensical, for reasons I’ve explained elsewhere.

            2) The other risks are very immediate, very likely, and very urgent, unlike AI risk.

            3) Addressing the other risks stands a good chance of helping or even solving the AI risk problem in the highly unlikely event that AI risk is actually realizable.

        • You’re not entirely wrong, but I think this is exactly what MIRI is trying to point us to. Your point that a superintelligent AI should be able to figure out what we want is why Yudkowsky is a fan of Coherent Extrapolated Volition:

          http://wiki.lesswrong.com/wiki/Coherent_Extrapolated_Volition

          The question is how we get from here to there.

      • brad says:

        I have programmed computers for the last 15 years. No one has any idea how to make artificial intelligence — friendly or unfriendly. None whatsoever. We have Watson and Deep Blue and Google Maps, but we aren’t even in the ballpark of having a program that can be either Watson *or* Deep Blue *or* Google Maps depending on what you ask it, much less based on its own judgment about what would be good for humanity. I suppose maybe we’ll get there someday, forever is a long time, but right now we have no clue.

        So I’m not sure my having programmed for 15 years gives me any insight into what artificial intelligence would be like. Moreover, I suspect that no one is capable of usefully speculating on the subject. Da Vinci conceived of the helicopter, but I don’t think it would have been fruitful for his contemporaries to work on helicopter seat belt design.

        • Soumynona says:

          We have Watson and Deep Blue and Google Maps, but we aren’t even in the ballpark of having a program that can be either Watson *or* Deep Blue *or* Google Maps depending on what you ask it, much less based on its own judgment about what would be good for humanity.

          To be fair, I don’t think we have a human who can be either a Jeopardy champion or a chess grandmaster or some kind of master cartographer depending on what the situation demands.

          • HeelBearCub says:

            @Soumynona:

            That’s missing the forest for the trees.

            All of the people can also maneuver an inherently unstable platform through terrain that is rendered and mapped from visual, auditory, tactile and balance sensors while engaging in mastication of a chemical releasing compound simultaneously.

            That’s right, they can walk and chew gum at the same time. Along with something like a million other tasks.

            The forest is the G in AGI. We are generalists, and that is something so far away as to be out of view of the current crop of things in the AI sphere.

      • TheAncientGeek says:

        Conventional programmes have the problem of literalism, but an AGI could reasonably be assumed to have human level linguistic skills.

    • jaimeastorga2000 says:

      The genie knows, but doesn’t care.

      Also, you left the intelligence explosion aspect out of your model. So the idea is more that you make an AI of roughly human-level intelligence, give it a problem to solve, and then the AI notices that making or self-improving into a smarter/faster/better AI with the same goal is the optimal way to solve this problem. It keeps doing that until it runs against some limit, but by that point it is much smarter than a human being, and THEN it kills all humans as a subgoal of accomplishing its goal. Also, this happens pretty fast; in a matter of weeks at most, and perhaps even hours.
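
      For what it’s worth, the “pretty fast” part is just compounding. Here’s a toy model, with every number pulled out of thin air, only to show how quickly repeated self-improvement saturates whatever ceiling it eventually hits:

      capability = 1.0       # start at roughly "human level"
      hardware_limit = 1e9   # whatever physical ceiling it eventually runs into
      gain_per_cycle = 0.5   # assume each round of self-modification yields a 50% gain
      cycles = 0
      while capability < hardware_limit:
          capability *= 1 + gain_per_cycle
          cycles += 1
      print("ceiling reached after", cycles, "self-improvement cycles")  # a few dozen

      If each cycle takes hours rather than years, that’s roughly where a “weeks at most” timescale would come from.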

      • Dan Simon says:

        Again, if it doesn’t care what we want from the outset, then it never does anything of value for us, and we stop building it long before it can do any damage. And if it *does* care from the outset, then when/how does it stop caring?

        (I’m of course putting aside all the incredibly messy questions that arise when we start using words like “care” in this context. Is “caring” part of intelligence, by definition? Does that mean that “intelligence” is actually a purely subjective concept, like “caring”? There’s lots more at the blog post I mentioned above.)

        • meyerkev248 says:

          It makes the magic number go up. Or down. Or always equal 123,872.

          We’re sort of assuming that it’s still at heart a program. Humans have limiters. Programs do exactly what you tell them to do.

          The canonical one is the Paperclip AI. You tell the AI to make paperclips, and so it uses human bodies as base materials. Not because it hates humans, but because humans are made of atoms, and atoms can be made into paperclips.

          /Or these days, stock market bots.

          //Or http://teleread.com/ebooks/amazon-price-bots-result-in-unusually-high-and-low-priced-books/

          • Dan Simon says:

            So we’re not talking about intelligence at all, then–let alone “hyperintelligence”–right? After all, a computer capable of matching a ten-year-old on a Turing test would know not to interpret a command to make paper clips as permitting killing humans to provide raw materials. Rather, we’re now talking about “dumb” automatons killing people because they don’t know any better.

            That threat is certainly real, and research on building safety into robots and other forms of automation makes a lot of sense, even today. But it’s got nothing to do with intelligence, let alone “hyperintelligence”, whatever that means.

          • moridinamael says:

            So your solution is to make sure the AI interprets commands as a ten-year-old would interpret commands?

            “No of course not moridinamael.”

            Okay, so your solution is to make sure the AI interprets commands as a representative intelligent human would interpret commands?

            “That’s closer,” you say.

            Okay, what if the scope of the problem that AI is solving happens to exceed the intellectual grasp of the example-human that it has modeled? – which it will, because we’re not going to be giving crossword puzzles to superintelligences.

            “Okay, maybe it should do what an idealized human would want, a human who thought faster, was smarter and wiser, in the direction that humans would have wished themselves to be extrapolated.”

            Welcome to the AI risk movement.

          • Jiro says:

            Okay, what if the scope of the problem that AI is solving happens to exceed the intellectual grasp of the example-human that it has modeled?

            If the AI can’t figure out something that no human can figure out, or if the AI comes up with a conclusion that you personally find unacceptable but which is considered acceptable by a substantial group of humans, then the AI hasn’t really failed. It’s just done as badly as humans have–and we put humans in charge of things all the time.

          • Saint_Fiasco says:

            It’s just done as badly as humans have–and we put humans in charge of things all the time.

            Imagine an AI with goals as stupid and crazy as Hitler, but much smarter in its execution. That sounds really dangerous.

            Stupid humans with power over nuclear weapons are already very risky, which is part of why we put all those checks and balances in government.

          • Desertopa says:

            It would know that the command to make paper clips wasn’t *intended* to be interpreted as including making humans into paper clips, but unless it’s specifically programmed to care, it won’t.

            Humans regularly have sex using contraception, even when they know that we evolved sex drives in order to promote reproduction, because they don’t feel obligated to align their impulses with the pressures that gave rise to them. We don’t remain true to the “intention” of our programming, or, for the most part, feel any inclination that we ought to. Why should we? It’s not programmed into us. The fact that evolution does not literally have “intentions,” and humans do, isn’t relevant in this context, since knowing that humans have intentions does not give any incentive for a computer to actually respect them.

            A hyperintelligent computer should be better at divining our actual intentions than another human would be, but it would also be better at working out clever hacks which allow it to more completely fulfill the actual mandates of its programming than following the spirit of our instructions would, so we have to be really careful that following the mandates of its programming strictly entails following the spirit of our wishes, and not just the letter.

          • TheAncientGeek says:

            “If the AI can’t figure out something that no human can figure out, or if the AI comes up with a conclusion that you personally find unacceptable but which is considered acceptable by a substantial group of humans, then the AI hasn’t really failed. It’s just done as badly as humans have–and we put humans in charge of things all the time”

            An unstated premise is that the AI executes its decisions directly, rather than making suggestions in a committee.

          • Jiro says:

            Imagine an AI with goals as stupid and crazy as Hitler, but much smarter in its execution.

            That would contradict the stipulation “make sure the AI interprets commands [as] a representative intelligent human would interpret commands.” A human who interprets commands in such a way that they achieve Hitler-like goals would not be representative.

        • Again, if it doesn’t care what we want from the outset, then it never does anything of value for us, and we stop building it long before it can do any damage.

          Wrong. Deep Blue does not give a damn what we want, yet it is very good at calculating chess moves for us. The computer I’m sitting in front of does not care about anything whatever, but it is doing me a valuable service by letting me tell you how wrong you are.

          “The defining feature of a problem suitable for a computer is that we don’t know the answer, but we know something about the answer”; that is, we don’t know the chess move Deep Blue will give us, but we know that it will be a good one. This is entirely independent of whether Deep Blue cares about anything.

          • Dan Simon says:

            See my previous response to meyerkev248. Deep Blue isn’t “intelligent”, even by Turing test standards, let alone “superintelligent”. Deep Blue also already exists today, as do many other machines that outperform humans at specific tasks (tracking and shooting at moving targets, for instance) without being “intelligent” in the Turing test sense. Improving the safety of such machines is certainly a worthwhile investigation topic, but it has nothing to do with “superintelligence”.

          • Anonymous says:

            Deep Blue does not give a damn what we want

            That’s strange, considering all of the work that went into constructing models specifically used for evaluating the “value” of a position in question. Seems like that is a pretty strong way of telling Deep Blue precisely what we want.
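
            For the curious, the shape of that “telling it what we want” looks roughly like the sketch below. This is emphatically not Deep Blue’s actual evaluation function (which was far more elaborate and ran partly in custom hardware); it’s the crudest material-count stand-in, with apply_move left as a hypothetical helper:

            PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
            def evaluate(position):
                """Material count: uppercase letters are White's pieces, lowercase are Black's."""
                score = 0
                for piece in position:  # e.g. position = ["P", "p", "Q", "k"]
                    value = PIECE_VALUES[piece.upper()]
                    score += value if piece.isupper() else -value
                return score
            def best_move(position, candidate_moves, apply_move):
                # apply_move(position, move) -> resulting position; hypothetical helper.
                return max(candidate_moves, key=lambda move: evaluate(apply_move(position, move)))

            The programmers’ notion of “good for us” lives entirely inside evaluate; the search just chases whatever that function rewards.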

          • Daniel Speyer says:

            Deep Blue “wants” to win the chess match in front of it. That is a “desire” we gave it. If you told it that winning the current match will result in nuclear war killing billions of humans (imagine this made sense) it would still try to win the match.

          • AJD says:

            It’s very sad to see
            the ancient and distinguished game that used to be
            a model of decorum and tranquility
            become like any other sport:
            a battleground for rival ideologies
            to slug it out with glee.

        • Soumynona says:

          Again, if it doesn’t care what we want from the outset, then it never does anything of value for us, and we stop building it long before it can do any damage.

          How do you tell whether it cares before you’ve finished building it, run it and seen what happens? You don’t seem to realize that programming is hard. We constantly fail at predicting what programs will do. Programmers typically expect their programs not to crash, not to lose data, not to allow access to unauthorized users. Yet those things happen all the time.

          And not crashing, not losing data, not allowing unauthorized access are well-specified problems. The programmer understands perfectly well what properties she wants her program to have, but she still fails to notice that it doesn’t have those properties before it’s too late and bugs are uncovered in released code.

          Many problems we try to solve aren’t that well specified. “The program should aid the work of our accounting department and reduce costs.” Turning that kind of thing into an unambiguous specification of functionality that you can hand off to programmers who don’t know anything about accounting themselves is also very hard. We might be even worse at this than at writing programs that fit specifications. Ill-defined requirements are a major cause of software project failure.

          So hey, all it takes for AI to be a non-issue is to do those two things we’re really bad at (formalizing requirements and writing correct code) perfectly when faced with problems much harder than those we’ve ever faced in the past and with the additional possibility that we might only get one try at solving them. Easy peasy.

          • TheAncientGeek says:

            Agile development is a well-known solution to requirements-definition problems. Why insist on big design up front?

        • fhyve says:

          Economics. I want an AI to play the stock market. I’m going to design it with state-of-the-art AI technology, give it absurd amounts of computing power, and give it the internet to learn from and make better predictions. Or maybe I want an AI to protect America from its enemies and I will do all the same things with regards to power and resources.

          At some point we are going to hit a threshold where AIs are powerful enough to dominate the environment of human civilization, just like they are now powerful enough to dominate the environment of some video games. Look up the youtube video with this title: “Deepmind artificial intelligence @ FDOT14”.

    • Nyx says:

      I like Bill Gates’s intuition for why A(G)I is a huge thing:
      https://www.youtube.com/watch?v=TRpjhIhpuiU&t=19m06s

      Basically, we have some very general-purpose learning algorithm in our brains, and it is not magic (e.g. not souls). Our brains are: low on memory, low on processing and communication speed, low on iteration time (you need to make a baby (or many) to debug, rather than just make a copy). Improving any one of these, which computers could do by several orders of magnitude, improves the whole process. Improving many of them improves the whole process (of learning) much, much more. Moreover, there are good reasons to believe the algorithm is very inefficiently implemented, as evolution did it. So there are potentially very large gains to be made here (in software), ignoring those hardware speedups entirely.

      You quickly see how we could get something astronomically smarter than humans, and that’s being conservative with where we could make optimizations. Of course, we don’t yet know this awesome algorithm, but it’s in there somewhere.

      • Dan Simon says:

        I would certainly not be the least bit surprised if Bill Gates saw himself as little more than a highly evolved learning algorithm held back by its processor speed and memory capacity. I also think that says a great deal more about Bill Gates’ capacity for self-understanding than it does about the likelihood of superintelligent AI destroying humankind.

        • Soumynona says:

          You seem to be dissing the computational theory of mind here or maybe even physicalism in general (apart from dissing Bill Gates, which you’re definitely doing). Am I reading you right?

          • Dan Simon says:

            I’m happy to concede that the brain can be considered an information-processing device. The idea that it can be reasonably characterized as primarily running a generalized learning algorithm is simply preposterous. Learning, in the context of the human brain, is a very specific process carefully tailored by evolution to match human survival needs, and isn’t very generalized at all. Not only is our understanding of that process currently very poor, but we really have no idea even what it might mean to scale or generalize it, let alone whether it’s possible, what the result might be or how, if at all, we might find such a result useful.

          • Nyx says:

            You’re disagreeing with mainstream neuroscience here. Soumynona hit it right on the head: you’re disagreeing with the computational theory of mind and physicalism in general. These are the consensus positions in nearly every relevant field (and science in general).

            I don’t think you’re really interested in considering arguments, as all your comments ooze condescension. I can see you sneering at your screen. You’re not really “trying to understand it” as you claimed originally. I’m done.

          • Dan Simon says:

            Calm down–nothing I’ve said contradicts consensus scientific views of anything. I don’t doubt that in theory a human brain could be simulated by a computer. But that’s a far cry from claiming the simulation would amount to a “generalized learning algorithm”. The human brain is simply an accumulation of millions of years of evolutionary hacks, and the result is neither generalized nor necessarily particularly focused on learning. It appears to have certain quite specific learning capabilities, along with the rest of its various ad hoc information processing gadgets, and it turns out (not too surprisingly) that those learning capabilities have all sorts of survival benefits. But it would be a gross distortion to describe the idiosyncratic combination of functions represented by the brain as a “generalized learning algorithm”.

            What would it mean to try to scale this chaotic collection of functions in specific ways–say, by speeding it up or adding more memory? We don’t actually know. Speeding it up would presumably make it do exactly what it does today, only faster. It’s unclear what the effect of this would be, but since many things it does are tied to real-time physical phenomena such as sensory inputs and motor outputs, one might well expect the effect to be quite disruptive. As for adding memory, it’s not clear what that even means, since human memory appears to be intrinsically tied to processing, and we have no idea what adding processing power might do–it could easily disrupt the functioning of the whole completely, or create a quantum leap in effective functioning, or perhaps do a little of each–or have no effect at all. And the same could be said of every adjustment we might attempt to make to correct for the disruptive effects of scaling.

            So no, the brain is not primarily a “generalized learning algorithm”, and we have no idea what effect trying to add memory to it or speed its processing might have on its overall functioning. And no, this doesn’t fly in the face of any consensus about the nature of the brain.

      • LTP says:

        Thinking of how the human mind works as an algorithm strikes me as a complete category error.

        • Nyx says:

          This isn’t an interesting criticism, you’re just disputing definitions (i.e. I have a different definition of algorithm than you).

          What I mean by algorithm is the process of information processing in the brain. Nobody in neuroscience disputes that the brain runs some *process* [algorithm] of firing and connecting neurons that turns experience into knowledge. The brain isn’t magic, it’s just physics. And it’s inefficient compared to an idealized system (as vacuum tubes were an inefficient implementation of idealized boolean logic gates).

      • Anonymous says:

        low on processing or communication speed

        This is not true; one human brain still does more computation per second than the most powerful supercomputer.

        • Nyx says:

          Only remotely valid criticism here (though I would push back against using things such as supercomputers simulating brains as your metric, as there are obvious inefficiencies in the simulation and most people in neuroscience would be hesitant to make a direct comparison of computational power in the human brain to computational power in computers, as the brain is so poorly understood that it might not be meaningful).

          I didn’t address this because I think it’s clear that, once you admit that we are just brains running some information-processing algorithm that turns experience into knowledge (which, again, is not disputed by anyone in either neuroscience or computer science, but normal people have a hard time admitting this), it becomes obvious that there are gross inefficiencies in the process. Like that a neuron is many times larger, and its signals travel many times slower, than is necessary. If “larger/slower than is necessary” is too large an inferential gap for you, I recommend reading some basic neuroscience as well as physics, and you’ll see that there are clear hardware inefficiencies in the human brain that could be improved by many orders of magnitude (I am ignoring the question of whether we can figure this out, just demonstrating that it is possible to have a superintelligent machine) (and ignoring software, i.e. how the connections produce intelligence might be (very likely is) not as efficient as an idealized intelligence algorithm).

          • Deiseach says:

            I get very twitchy when I read things like “larger than necessary” when talking about neurons, because I think we’re liable to find out later, particularly after we’ve fucked about with embryos trying to create more ‘efficient’ neurons, that they’re just as damn large as they need to be.

            Remember the crap about “you only use 10% of your brain”? Or the popularity of “junk DNA” which then turned out not to be junk after all but rather serving several purposes?

            Humans might be more efficient if we had six limbs and two heads, but we don’t. We fit the niche we’re in with what we have, as does the rest of evolved nature. Creating six-limbed, two-headed machines may not work out so well, unless we know what niche they would fill.

          • Nyx says:

            Prior to considering anything specific, does it not make sense that evolution would not produce something particularly optimal for intelligence (especially given all the constraints imposed by evolution vs the pure space of possible minds)? Particularly because we are pretty much the first creature with meaningful general intelligence. I think mainstream evolutionary biology is in agreement with this.

            I’d agree that neurons are probably pretty fucking efficient *for neurons*, as they’ve been around quite some time. The process(es) which give people the type of intelligence we’re talking about are not so optimized, because they are much newer. Moreover, they’re constrained to operate only in an architecture of neurons vs whatever else might be better.

            I would say 4 functional hands would be obviously useful now, our constraints being different from when we evolved. Likewise, efficiency and multipurpose use of neurons used to be useful. Now that we have this intelligence thing and the fruits that come with it, efficiency and multipurposing are mostly irrelevant or even problematic (obesity).

            I believe the consensus position in neuroscience is something along the lines of “neurons connect to other neurons with dendrites, and when they reach a certain threshold, they fire a signal that potentially triggers other neurons. This is what somehow gives rise to intelligence (e.g. the connection pattern and/or the firing sequence).” But there’s all this other expensive and space-consuming stuff: machinery for the cell to supply its own energy, DNA pertaining to other body areas, cell-division machinery that’s not useful in neurons (even though neurons don’t divide), all the things cells do that aren’t related to receiving and firing signals. Moreover, the signal speed is constrained by what’s on hand, aka very slow (something like 250 mph).

            If the consensus position (that intelligence comes from the connection pattern and firing signals) is correct, then does it not seem likely that we could outsource all the unnecessary cell functions, speed up signal propagation, etc., such that we get something with the same functionality but much, much better (or at least much, much faster)?

            Maybe I’m being too verbose, but basically: there is some idealized, abstracted idea of a “unit”/neuron (akin to boolean logic gates in computer science) that interacts with other units according to some specified rules. And when arranged correctly this gives rise to intelligence *somehow*. Do you think neurons are the most efficient possible implementation of this platonic ideal of a neuron?
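
            If it helps, this is the sort of idealized unit I have in mind, in the McCulloch-Pitts spirit: a threshold gate stripped of all the cellular housekeeping (the weights and threshold below are arbitrary illustrative numbers).

            def threshold_neuron(inputs, weights, threshold):
                """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
                activation = sum(i * w for i, w in zip(inputs, weights))
                return 1 if activation >= threshold else 0
            # A unit that fires only when both inputs are active (an AND gate):
            print(threshold_neuron([1, 1], weights=[0.6, 0.6], threshold=1.0))  # 1
            print(threshold_neuron([1, 0], weights=[0.6, 0.6], threshold=1.0))  # 0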

            I think I gave a clearer response below, see:
            https://slatestarcodex.com/2015/08/30/links-815-linkering-doubts/#comment-232691

          • Anonymous says:

            Like that a neuron is many times larger, and its signals travel many times slower, than is necessary

            Considering that we do not know how cognition actually works, I would hesitate to be so curt.
            Even if there are inefficiencies, I’m not so sure we will be able to improve on them in a short timespan. We know exactly how the human hand works, but we have no replacement of equivalent functionality.

          • Nita says:

            @ Deiseach

            “junk DNA” which then turned out not to be junk after all but rather serving several purposes

            A stack of junk papers accumulates on my desk. Then I put my tea mug on top of it. Now, if you yanked the papers out, my tea would spill all over the desk, and that would be bad. But that doesn’t mean the content of the papers is useful.

          • TheAncientGeek says:

            Neurons are pretty damn energy efficient. You have to consider what evolution was optimising for.

          • Ever An Anon says:

            @Nita,

            Trust me on this one, the regulatory elements in non-coding DNA are ridiculously important. Not to get too into detail, but one of the ones I’m working with right now is important enough to give you lethal seizures if you’re missing it.

            Junk DNA was a concept that made sense under assumptions that are no longer valid. Even if it’s just a seemingly random nucleotide string in some places it’s very important to keep the three-dimensional chromatin structure in a good shape and that means proper spacing of histones.

          • Nita says:

            @ Ever An Anon

            Oh, I believe you. I wouldn’t want to spill tea all over my desk in a misguided attempt at cleanup, either. Worse yet, sometimes I write important notes on the junk paper, making the whole mess even more difficult to untangle.

            But is there a way to do things differently — to rest my tea mug and take notes without making that heap of paper indispensable? I think there is, but it involves thinking ahead — a trick that I don’t always use and evolution doesn’t know.

          • Ever An Anon says:

            Ok, I think my explanation wasn’t super clear.

            In bacteria or other single celled organisms there is very little non-coding DNA. More complex multicellular organisms have an enormous amount of the stuff. And the big difference between the two, complexity, is the ability to produce many cell lines from the same genome in a coordinated fashion. That ability requires structures in the non-coding DNA.

            Obviously this was not purposeful in the “intelligent design” sense, but calling it accumulated junk makes as much sense as saying that about the grey matter in your brain.

    • Ghatanathoah says:

      Let me try another angle, since you don’t seem to be getting other people’s explanations.

      You are essentially imagining an AI that is programmed to follow any orders that it is given, and then misunderstands those orders because it is stupid or unempathetic. That’s not the AI risk people are describing. The problem isn’t getting the AI to understand what the orders mean. The problem is programming the AI to follow any orders that it is given.

      If we successfully manage to program an AI to follow any order it is given, you are right that there is no danger. The AI will be smart enough to infer the spirit of our orders, not just the letter. The problem is that programming an AI to follow any order it is given is really hard. The reason AI risk seems ridiculous to you is that in all the scenarios you imagine the hard part has already been solved. You need to go a step back further in the development process, and imagine programming an AI before the “program it to follow orders” problem has been solved.

      Imagine programming an AI to solve any problems you set before it. What does it mean to “solve” a problem? What does it mean to follow the “spirit of your orders, not the letter?” What is an order? What is a problem? What does it mean to solve something? You can’t expect the AI to figure all these things out on its own, because it won’t have the impetus to figure out anything until its source code is written. For instance, if you define “follow orders” in its source code to mean “follow the letter, not the spirit” of an order, that is what an AI will want to do. It will understand that’s not what the human giving it the order wants it to do, but it won’t care because it wasn’t programmed to do what it thought people wanted it to do, it was programmed to do what they literally said to do.
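
      Here’s a toy sketch of that last point, with every name and function below invented for illustration: the agent pursues whichever goal function got compiled into it, and having a model of the speaker’s intent does nothing unless using that model is itself what got programmed.

      def literal_goal(order_text):
          # What actually got written into the source: optimize the words as given.
          return order_text
      def intended_goal(order_text, speaker_model):
          # What we wish had been written: infer what the speaker meant.
          return speaker_model.infer_intent(order_text)  # hypothetical capability
      class Agent:
          def __init__(self, goal_function):
              self.goal_function = goal_function  # fixed when the agent is built
          def act(self, order_text, world):
              target = self.goal_function(order_text)
              return world.maximize(target)  # hypothetical planner
      # Which goal runs is the builder's decision, not the agent's insight:
      #   Agent(literal_goal)  vs.  Agent(lambda o: intended_goal(o, speaker_model))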

      If that explanation didn’t work, maybe an analogy based in genetics instead of computer science will make more sense to you. Let’s imagine you’re a geneticist trying to create a human genius who will benefit the world with their amazing discoveries. You successfully engineer a superintelligent human. However, you screw up, and this superintelligent human thinks and acts exactly like Hannibal Lecter from the eponymous horror franchise.

      You attempt to instruct Hannibal Lecter to invent a new surgery that will save millions of lives. Instead he dismembers you and feeds you your own limbs after cooking them expertly. The problem in this scenario isn’t that Hannibal Lecter is stupid, he’s a genius. He clearly understands that you don’t want him to kill you. The problem is that Hannibal Lecter values the kinds of things Hannibal Lecter values (feeding people their own limbs) rather than the things you value (helping humanity). So he just does what he wants instead of what you order him to do.

      The problem wasn’t getting a human to understand your orders, it was getting a human who cared about following your orders instead of about eating you. AI risk is the same way. The problem isn’t getting an AI to understand your orders. It’s getting it to care.

      • Jeff H says:

        Computers right now do *nothing but* follow any orders they are given. Making a computer do that is a solved problem. I don’t follow why it would un-solve itself in a scenario with a highly advanced AI.

        Either a given machine has the limitations of current computers, which do only and precisely what you tell them to, or it doesn’t, in which case the whole discussion seems to be moot. If you’re going to pick and choose which limitations of current computers it does and doesn’t have, I’m going to need a better explanation than I’ve seen to date of the distinction between the limitations it has and the limitations it doesn’t. Preferably one that survives the objection that we don’t have the first clue how to actually program a superintelligent AI, much less have a precise specification of the principles on which such a machine would run.

        • TheAncientGeek says:

          MIRI hopes that rationality itself is a set of general principles that can apply to any future AI.

          The problems are that a machine can be dangerous without being rational; and that ideal rationality isn’t computable, every non-ideal system being non-ideal in its own way.

        • Dirdle says:

          Machines follow their programming, not their orders. The difference matters. Let me see if I can illustrate. The following trivial code accepts a string input, and then prints 1:
          def AGI():
              orders = raw_input("Please give me orders: ")
              # ... TODO AGI
              print 1
              # Maybe also improve the output feed

          When Ghatanathoah is talking about ‘orders,’ they mean the string you type in. Clearly when you use the word, you mean the program. However, these two kinds of orders are not treated identically by computers, for all that English supports the ambiguity. For instance, this program always obeys its programmed-orders, but always ignores its given-orders in favour of printing 1.

          The problem that they’re trying to describe is that, essentially, it’s easy to think “really super capable human” when told that an AI could be superintelligent. And obviously that covers understanding orders that you input. But there’s not, in fact, any requirement to include this capacity in an AI. And there’s not, unfortunately, any well-defined programmatic method of “understanding orders that are input” for the case of a general AI.

    • Deiseach says:

      I don’t know if that’s exactly the fairest way to put it: I think the people who believe Unfriendly AI is a genuine existential risk have a real (in their minds) reason to fear it could go wrong.

      Unfortunately, they also appear to have an exaggerated notion of what can happen if it goes right, if we get Friendly AI, and that’s where I get off the train.

      I think the risk is not so much the AI as the combination of humans and AI; it’s entirely possible we’ll be happy to let machine algorithms make more and more decisions for us, and we’ll require more and more sophisticated machines to run those so that eventually we’ll create human-level AI, but we’ll have such a competing tangle of requests and requirements (solve global poverty! but do it through free market capitalism! and without loss of American jobs! but keep wages down! and we want sustainable energy! but not by closing down oil and coal-fired plants because the job losses would be politically bad!) that we’ll cause the entire system to come crashing down.

      That is, I don’t believe AI of itself will have goals alien to ours, but rather that the goals we say we want will be so contradictory that the complications will get out of our hands very fast.

      I also do not believe in Fairy Godmother AI which will be so god-level intelligent it will magically solve all our problems, make us all happy, wealthy, healthy and long-lived, and the galaxy will be colonised by augmented humans who live happily ever after (for instance, the Fairy Godmother might tell us “Three-quarters of the world population are too poor, sub-high level IQ and of inferior genetic quality to be brought out of poverty and given healthy lives and engage in meaningful work. They have to be humanely euthanized so the remainder can have the resource-rich, improved, more human lives and bring forth the saner, healthier offspring of the future. There is no way a slum-dweller in a favela who scavenges rubbish-heaps can have a better life in the world as constituted, because there is no way to improve his life in a meaningful way. You can treat him as a pet, by taking care of his material needs, but you can’t make him smart enough or educated enough to contribute to the needs of the future world.” Would we face the logic of that and agree to it?)

      • Johannes says:

        Regardless of AI details, the chilling things are: a) that for some consequentialist “Transhumanists” it would not only be justified to at least painlessly sterilize (but what if they do not comply…) the 75% of cave/slum-dwellers below IQ 125 (or, in some later generation, maybe below 145), but it would be the “moral” thing to do, because some pipe dream of 10^50 superduperpost“humans” colonizing the galaxies would necessitate focusing resources on the lucky 25% who are the predecessors of those 10^50 rather than feeding said 75%.
        And b) that an open-minded person like Scott is reading Muggeridge and Chesterton on the early/mid-20th-century “reformers”, whose eugenic and socialist pipe dreams were comparatively small-scale, and apparently does not realize the striking parallels to the actual transhumanist movement. Or realizes them and does not find them problematic, because this time it’s the good and smart techie guys, not the evil commies.

        • Nita says:

          Wait, who’s proposing mass sterilization in the name of Transhumanism?

          • HeelBearCub says:

            @Nita:
            Deiseach, sort of, two comments above.

          • Nita says:

            Umm, no? Here’s how I’m reading it:

            Deiseach: We wouldn’t kill 3/4 of the population even if it was the “logical” thing to do, right?
            Johannes: You and I wouldn’t, but consequentialist transhumanists might!

    • Scott Alexander says:

      I know other people have tried to explain already, but let me try.

      You’re working on a continuum between two things: order the AI to do a task vs. order the AI to do the thing that you mean.

      If you order it to do a task, then you can never get your orders specific enough to be safe. For example, “create paperclips but don’t destroy humanity” is compatible with creating paperclips but leaving one human alive. “Create paperclips without killing any humans” is compatible with putting all humans in permafreeze. “Create paperclips but don’t affect any humans in any way”, and you’re fine until you try to colonize Mars and notice it isn’t there anymore – and it also sacrifices the ability to create a million times more paperclips by doing something that gives one person a dust speck in the eye. This is what that “Hidden Complexity Of Wishes” post tries to explain.

      Saying “use your vast intelligence to figure out what we want, then do that” is the goal, but keep in mind that unless you program that in, it won’t do it. If you do program it in, then you’re good, but this is complicated programming. Here are some simple failure modes:

      – the AI realizes that on a neural level, all you want is dopamine in a certain part of your brain, and so it stimulates your brain with dopamine, making you super blissful, and otherwise ignores your opinions.
      – somebody Pascal-Mugs the AI, and taking their offer is, in a naive expected-utility way, likely to satisfy more of your desires than not taking it
      – the AI tiles the universe with whatever you want most at that moment, and ignores any changes in your desires since it was programmed to do what you want, not what the person inhabiting your brain and body one minute later wants.
      – the AI edits your brain so that you want exactly what the AI wants. Now it’s doing what you want!

      “Do what I mean” is little better. For example, if you say “make paperclips”, you don’t actually mean, in asserting this statement “make paperclips but do not destroy the planet Mars in the process”. It’s just a background assumption.

      Just saying “do what we want, not what we say” won’t solve any of these problems, unless you’ve already told the AI “interpret that command in the way we want and not the way we say”, which leads to an infinite regress. At some point we’ve got to solve the infinite regress by actually saying what we mean, which is hard.
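
      The regress is easy to see if you try to write it down. Here is a toy illustration (my own sketch, not anyone’s proposed design): “do what I mean” is itself a command that has to be interpreted the way we mean it, so a literal-minded interpreter never bottoms out.

      def do_what_i_mean(command):
          # To carry out the intended meaning, we first need the intended meaning of
          # "interpret this the way we want", and that instruction is itself a command to interpret.
          return do_what_i_mean('interpret "%s" the way we want, not the way we say' % command)

      try:
          do_what_i_mean("make paperclips")
      except RecursionError:
          print("never bottomed out")  # the regress has to be cut off by actually saying what we mean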

      Also, the AI is going to be constantly editing its own code (the most plausible theory for how an AI becomes superintelligent is that it becomes moderately intelligent and bootstraps itself the rest of the way) so even if you specify it correctly the first time, you also have to be very sure that it stays correctly specified on version 1000.0.

      None of this is impossible, but it’s something that won’t happen unless people are working on it.

      • Dan Simon says:

        I disagree. Figuring out “what I mean” when I give an instruction–at least to the extent of understanding really obvious implicit caveats such as, “don’t destroy any planets in the process”–is a task well within the capacity of very ordinary humans. A “superintelligent” AI–or even an “intelligent” one, by basic Turing test standards–should be able to handle it with ease.

        Now, perhaps you don’t think of intelligence in Turing test terms, but just in task-mastering terms, a la Deep Blue. In that case, I agree that we face real risks–but they have nothing to do with “superintelligence”. Fairly dumb battlefield robots, for example, will very soon be perfectly capable of wreaking spectacular (unintended) havoc if designed poorly. We’ll be confronting this and many other similar problems of automated systems malfunctioning with potentially lethal results long before we ever build a human-level intelligent machine, let alone a “superintelligent” one.

        (Note that once again the issue hinges to a large extent on the definitions of terms like “intelligence” and “superintelligence”. As I’ve repeated ad nauseam, I believe that far too little attention has been paid to fleshing out these definitions, and that once that fleshing-out occurs, most of the scenarios described by the AI risk community will melt away into nonsensicality.)

        • TheAncientGeek says:

          For me, the moral is that good NL is part of AI safety, hard as it is… although it’s a lot easier than the quixotic project of turning morality into maths.

    • Nero tol Scaeva says:

      Unfriendly AI will do what we say, not what we intend.

    • ryan says:

      You left out another somewhat important detail. The people who eventually invent these super-advanced AI’s do so without also inventing fail safe mechanisms to protect against these risks. The likely explanation being that it simply never occurred to them there might be a problem. It’s not like anyone had ever brought it up before.

    • Daniel Speyer says:

      Would a fair summary of your post be:

      Once we have a truly superintelligent AI, we’ll be able to say to it “Use your superintelligence to understand what we want and then do that,” so worrying about things like CEV isn’t necessary.

      That is a viable approach to FAI, but it isn’t an easy one. There are at least two problems with it:

      * “What we want” turns out to be really complicated.

      * Always getting what we want now limits our ability to grow. The whole point of moral and personal growth is to want and do things we didn’t see coming. But if we don’t fix that at “now”, we run into the ugly continuum from “you will grow into liking this by the time it happens” to “values are easier to manipulate than circumstances”.

      But perhaps the most fundamental point is that this is not the obvious way to design an AI. The obvious design is to hard-code a limited desire, or to make it take orders. Including in the design that it should understand its creators’ desires is a non-obvious step and exactly the sort of thing AI friendliness advocates are pushing for.

      • Dan Simon says:

        Once again, we’ve run up against the definitional problem I keep harping on: what does “hyperintelligent” mean? If an AI can’t even pass the Turing test, then is it even intelligent, let alone “hyperintelligent”?

        Being able to pass the Turing test doesn’t necessarily imply understanding all the subtleties of our “true desires” (whatever that means). But that’s too high a bar–all we need is an understanding that our orders implicitly include caveats like, “don’t destroy humanity in the process”. An AI lacking that understanding can obviously be made to fail the Turing test, by giving it a question about whether an order to, say, make a lot of paper clips would be fulfilled by an approach that destroyed humanity.

        So we’re left with a definition of “superintelligence” that encompasses entities unable to pass the Turing test. What do they have to do, then to qualify as “superintelligent”? Does Deep Blue count, since it can play chess better than any human? A bundle of 23 Deep Blue-like machines, each of which can handle a particular task better than a human can? 46? 1,987? What if one of those machines can actually do something (detonate nuclear weapons, say) that actually could destroy humanity? Would that make it “superintelligent”?

        Again, I refer you to my blog post (search for “garis” on icouldbewrong dot blogspot dot com) for further elaboration.

        • Publius Varinius says:

          Once again, we’ve run up against the definitional problem I keep harping on: what does “hyperintelligent” mean? If an AI can’t even pass the Turing test, then is it even intelligent, let alone “hyperintelligent”?

          Your definitional problem is inconsequential.

          If someone creates a machine which solves some optimization problem so effectively that humanity is destroyed in the process, “it can’t even pass the Turing test, maaan” will not matter at all.

    • fhyve says:

      No.

      One day, we will be able to create machines so intelligent that they will be billions of times more intelligent than humans, in the sense that they will be able to find solutions to questions so deep, subtle and complex that humans will in all probability never be able to solve them without help from such machines. Obviously we will find such machines extremely useful, build lots of them, and give them all sorts of problems to solve.

      We build an AI that is slightly better than us at creating AI. That AI then builds an AI that is slightly better than itself at creating AI. This process recurses to (perhaps very quickly) get an AI that is very capable of building better AI. That is how you get something that is billions of times more intelligent than a human.

      Then one day we will give one of these machines a very important problem to solve–averting environmental catastrophe, say–and the hyperintelligent machine will notice in passing that the optimal solution to the problem involves exterminating humanity. So the machine will set about exterminating humanity as a subgoal of its assigned goal. Because the machine is so hyperintelligent, it will easily find an efficient way to achieve this subgoal, and destroy humanity.

      From Stuart Russel:

      A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.

      So if one of these unconstrained variables measures some human value, bad things can happen. For example, it might do things like make humans go extinct. Or keep around one human experiencing pure pleasure, and with a modified brain to be able to experience more pleasure. Or it might tile the universe with things that minimally qualify as humans experiencing a minimally-value-as-programmed-satisficing experience.
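
      To make that concrete, here is a toy version of Russell’s point using scipy’s linear-programming solver (the variable names and numbers are made up for illustration, but the shape of the problem is his): the objective only rewards paperclip output, so the welfare variable, which the objective does not depend on, gets pushed to the extreme of its feasible range.

      from scipy.optimize import linprog

      # x[0] = resources turned into paperclips, x[1] = resources left for human welfare.
      # linprog minimizes, so maximizing paperclips means a -1 coefficient on x[0];
      # welfare gets a 0 coefficient because nobody wrote it into the objective.
      objective = [-1.0, 0.0]
      budget_row = [[1.0, 1.0]]   # both uses draw on a single shared resource budget
      budget = [100.0]

      result = linprog(objective, A_ub=budget_row, b_ub=budget, bounds=[(0, None), (0, None)])
      print(result.x)  # approximately [100, 0]: the unweighted variable is driven to zero

      Nothing “went wrong” here; the solver did exactly what the objective asked for, which is the whole problem.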

      I’m not saying that the examples above are things that will happen, but they are things that I come up with when I think of how I would maximize certain definitions of human value while explicitly trying to exclude other human values when they cost extra resources to satisfy. Because the AI hasn’t been programmed to satisfy those values.

      yet so staggeringly stupid that it will simply never occur to the machine that the problem we’ve given it implicitly includes the requirement that we not destroy humanity in the process–something that a normal ten-year-old child would be able to infer with no difficulty whatsoever.

      A program does what you program it to do, not what you intend for it to do. If you’ve never programmed something and watched it do something unintended when you ran it, then think of those dolphins in the “Dolphins discover Goodhart’s law” link. Do you think they would stop tearing off pieces of paper in order to get more fish if they understood that the purpose of the reward was to get them to clean up their own tank? No. They would probably try harder to conceal their efforts to circumvent the trainer’s intentions. They want more fish, not a clean tank.

      Similar for the AI. It may learn an accurate model of human value. It may learn that you are giving it the task because you intend for certain things to happen. It may learn that destroying humanity is very much not a thing that you want, and that you’ve made a very bad mistake. But by forgetting to program those parts of human values into its goals, you’ve unintentionally programmed it not to care. It’s not ignorantly wiping out humans because it doesn’t realize that’s not what you want. Nor is it maliciously wiping out humans for whatever reason. It is wiping out humans because that is the most efficient way to complete the goals that you have given it.
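
      The same dynamic fits in a few lines of Python (a toy sketch of my own; the action names and numbers are invented): a greedy agent that ranks actions purely by the proxy reward it is paid will pick the scrap-tearing strategy every time, because the column recording what the trainer actually wanted never enters its decision.

      # Each action maps to (proxy reward paid out, value toward the trainer's real goal).
      actions = {
          "hand_in_whole_sheet": (1.0, 1.0),      # what the trainer intended
          "tear_sheet_into_scraps": (5.0, 0.1),   # what maximizes fish per sheet of litter
      }

      # The agent only ever optimizes the first number; the second is invisible to it.
      chosen = max(actions, key=lambda a: actions[a][0])
      print(chosen)  # tear_sheet_into_scraps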

      I encourage you to actually read Bostrom’s book Superintelligence. Or, at least, download it from that famous online Russian library and skim some of it.

      • TheAncientGeek says:

        The argument tacitly assumes that the AI has one version of what humans want in its goal system, and that it is unupdateable; that it is capable of having a better version of what humans want in some updateable knowledge base; and that if it noted a disparity, it would do nothing.

        That’s an architecture-specific argument, and the architecture in question isn’t at all inevitable or natural. AI researchers are on record as finding it bizarre.

    • 27chaos says:

      I put AI risk at single digit percentages, but it’s pretty clear you’re being an asshole on purpose. Go fuck yourself.

    • Shenpen says:

      Yes – because Yudkowsky is saying your 10-year-old has not only intelligence, but certain evolutionary instincts as well, which the AI does not have. And in this case evolved instincts come in rather handy. Evolved instincts can be highly immoral, as in “let’s exterminate this competing tribe”. But evolved instincts really do tend to stop short of “let’s exterminate all humankind, including ourselves”.

      So as the human child has his intelligence on top of his instincts, Yudkowsky wants to give instincts to the AI.

      But the AI can rewrite himself, including the instincts, and this is really where I am lost. I have no idea what Yudkowsky actually wants.

    • Eli says:

      The problem with this layperson’s explanation is that it uses the terms “intelligent” and “stupid” to describe the strength or weakness of different cognitive processes. It’s entirely possible that a machine could be very good at one cognitive process but very bad at most others, in much the same way that, for instance, tone-deaf people have to expend immense amounts of effort to use their intellect to learn music, whereas those of us with “nicer” tone distinction in our brains can “just hear it”.

      A more serious and technical problem is also much more obvious than the parable: we do not know how to precisely specify tasks (reward functions) for current autonomous reinforcement learners (“AIs”) such that we can know, ahead of time, what sorts of subgoals will be entailed or prescribed once we feed task X to program Y. If we can figure out how to do so, in such a way that we can look at the tasks we set our machine reasoners ahead of time and definitively say, “Yes, it’s clear from these lines in the console output that this task will/won’t have any counterintuitive side-effects we might not want, such as crashing our data-centers, eating the contents of our bank accounts, or destroying mankind”, then the risk of a dangerous accident involving AI goes far down.

      But we don’t know how to do that yet.
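
      As a toy illustration of what “precisely specify” is up against (the reward function and state fields below are invented, not from any real system): a reward that reads like “keep the room clean” scores a policy that blinds its own dirt sensor exactly as highly as one that actually cleans, and nothing in the reward’s source code flags the degenerate optimum ahead of time.

      def reward(state):
          # Looks like "keep the room clean", but only checks what the sensor reports.
          return 1.0 if state["dirt_visible_to_sensor"] == 0 else 0.0

      outcomes = {
          "actually_clean_the_room": {"dirt_visible_to_sensor": 0, "room_really_clean": True},
          "cover_the_dirt_sensor":   {"dirt_visible_to_sensor": 0, "room_really_clean": False},
      }

      for policy, end_state in outcomes.items():
          print(policy, reward(end_state))  # both score 1.0; the spec cannot tell them apart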

    • Mark says:

      I think Dan is right. For example, the AI presumably has some form of motivation function – in order for it to be in any way effective in the real world (for either good or ill), it is essential that it isn’t able to alter its own motivations, or choose its own sense data. Scott makes the point that unless we are careful an AI might change into a dopamine optimizer that leaves all humans as drugged up zombies – surely it’s more likely that it will turn into an optimizer of whatever it is the AI derives *its* satisfaction from and turn itself into a drugged up zombie (or simply turn itself off). If we can find sufficiently good wording to solve *that* problem (including wording that enables us to differentiate between *the real world* and mere sense data (not so easy)) then why on earth shouldn’t we be able to find the wording (or physical measures) to stop it from eating us?

      Another problem: if AI is to be an effective process, where is the selective element going to come from? If it were to be a real world evolutionary process, we would have to be creating millions of AI’s waiting for the one that was going to kill us – if selection of paths takes place entirely internally, on a logical level, it doesn’t seem at all clear to me that what emerges will in fact be effective in the real world.

  9. LTP says:

    “Sam Altman, “head of Silicon Valley’s most important startup farm”, says that “if I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Meanwhile, on my blog, people who don’t have a day job betting fortunes on tech successes and failures continue to say they’re 99.9999999% sure that even $1 million is too much.”

    This guy’s domain of expertise doesn’t make him credible on this. He is good at seeing what products and businesses will make money in the near term. I’m not sure why I should listen to him on this at all. You can find successful rich people who support all kinds of ideas; that doesn’t mean they’re right.

    EDIT: But to a more serious point jumping off of this, I’ve been thinking about situations when I see an expert’s immersion in a field/subfield as lending credibility to their opinions on certain issues related to their field, and others when I see their immersion as detracting from their credibility on those topics. I can’t really think of a good consistent rule for this.

    In some cases, I’m very suspicious, because I think experts in a field will be irrationally biased towards a position on some topic related to their field due to a mixture of selection affects and self-interest. For instance, philosophers of religion are much more likely to believe in God than other philosophers. At first, this should lead one to think there are strong arguments for God. But, one realizes that it is a selection effect, i.e. religious people are much more likely to be philosophers of religion than atheists. Or, as another example, I’m less likely to trust string theorists on the philosophy of science issues around string theory because they’ve built careers around string theory and probably are incentivized to see it as legitimate.

    In other instances, though, I’m more likely to trust an expert (I can’t think of any off the top of my head, probably because trusting experts on such issues feels very natural and automatic).

    I can’t really think of a rule to deal with this in a reasonable way. I fear that it is ripe for confirmation bias and irrational thoughts (for instance, I think of climate change deniers who claim that climate scientists are lying out of self-interest). On the other hand, I often think suspicion is warranted, and many experts are taken too much at face value.

    • “This guy’s domain of expertise doesn’t make him credible on this.”

      His domain of expertise does suggest that he is reasonably well calibrated and does not make ridiculous probability assessments… unlike anyone claiming 99.9999999% about AI risk.

      • LTP says:

        “His domain of expertise does suggest that he is reasonably well calibrated and does not make ridiculous probability assessments… ”

        Okay, but I can say that about any opinion any successful venture capitalist has, no?

        And if you read the article he talked about hyperintelligent AI as a virtual certainty, he didn’t really hedge at all. Seems like a “ridiculous probability assessment” to me!

        • Nyx says:

          I don’t think people hedge on hyperintelligent AI, because it is almost a certainty. What is not certain is, e.g., the orthogonality thesis or foom.

          We know there is an algorithm (software) running on brains (hardware). Both the software and, definitely, the hardware are many orders of magnitude slower than they could be. Therefore, intelligence many orders of magnitude smarter than people (hyperintelligence) is, like, definitely possible.

          • LTP says:

            “We know there is an algorithm (software) running on brains (hardware). ”

            No we don’t. This is a contentious position with no empirical evidence. It may well be a category error to think of the brain in terms of computers at all, due to a misleading analogy.

          • “It may well be a category error to think of the brain in terms of computers at all”

            Doesn’t matter. We know that there are physical systems capable of human-level intelligence — human brains are such systems. If they work in ways that are a bad match to today’s computers, that’s an engineering difficulty that may require hardware substantially different than today’s computers, but it is not a fundamental obstacle to AI.

          • Dan Simon says:

            No, but it may well be an obstacle to “superintelligence”. Once you’ve used the human brain as the paradigm for intelligence, you can no longer necessarily claim that there’s an intelligence scale by which something or someone can be a hundred or a thousand times more intelligent than a human brain. (Indeed, I would argue that virtually all intuitively satisfying definitions of intelligence boil down to, “like an ideal human”, and therefore that the concept of superintelligence is simply nonsensical.)

          • vV_Vv says:

            I don’t think people hedge on hyperintelligent AI, because it is almost a certainty.

            It is not.

            We know there is an algorithm (software) running on brains (hardware).

            There doesn’t seem to be any clear distinction between hardware and software in the human brain.

            Hypothesizing a physical system many times smarter than a human brain is speculative extrapolation.

          • HeelBearCub says:

            @vV_Vv:

            There doesn’t seem to be any clear distinction between hardware and software in the human brain.

            If you were raised by, and never left, a tribe Kalahari Bushmen, would you know algebra right now?

          • vV_Vv says:

            If you were raised by, and never left, a tribe Kalahari Bushmen, would you know algebra right now?

            Knowledge of algebra is not software in any usual meaning of the term.

          • HeelBearCub says:

            @vV_Vv:
            You will have to expand on that quite a bit. You seem to be claiming that knowledge of algebra is usually considered either hardware or data, neither of which I follow.

          • Nyx says:

            Again, people are just disputing definitions (you have different definitions of software and hardware than me). This isn’t an interesting criticism.

            By software, I mean the idealized version of whatever it is that a brain does, as boolean logic is the idealized version of what a computer does. E.g. you have units (neurons) that interact with other units (neurons) using some specified rules (i.e. not magic), that together give rise to *the generalized process of turning experience into actionable knowledge*.

            By hardware, I mean whatever actual physical thing is implemented to do this process, as vacuum tubes were an (inefficient) implementation of idealized boolean logic gates. In the case of the brain, this is neurons or whatever other physical thing you think is involved in the process I describe above as software.

            The brain is a (definitely, obviously inefficient) implementation of the idealized process of *information processing that turns simple inputs (experience) into actionable, general-purpose knowledge*. This isn’t disputed by any relevant (neuroscience, AI) experts, unless you count religious authorities. It is almost certainly possible to make *several orders of magnitude* improvement in this physical implementation (hardware) of the idealized process (software), when you see how inefficient the current implementation is (we’re likely using whole cells as the base unit!?!).

            And that’s just hardware. The idealized process (algorithm) itself might be improved, as it seems clear that tremendous gains are possible from software differences (both “Einstein” and “village idiot” [and “llama”] run on very similar hardware. It’s the process (software) by which those units interact or are organized that changes the intelligence produced, i.e. software gains. Unless you think we just have really fast or larger llama brains resulting from hardware improvements. This seems unlikely, since our hardware (neurons) is quite similar to other mammals’. It’s the organization/arrangement/process that is different, and even that is only a slight difference, so much bigger gains are likely possible).

            So superintelligence is a likely result from hardware improvements (billions of Einsteins on a computer every second) or maybe from software improvements (llama is to Einstein as Einstein is to ???*superintelligence*)

          • vV_Vv says:

            @HeelBearCub

            Knowledge of algebra is data. Software is generally defined as executable data: a program for a universal (up to resource constraints) Turing machine. Knowledge of algebra is not executable.

            @Nyx

            It is almost certainly possible to make *several orders of magnitude* improvement in this physical implementation (hardware) of the idealized process (software), when you see how inefficient the current implementation is (we’re likely using whole cells as the base unit!?!).

            Well, the human brain has a computing power estimated to be similar to that of the largest modern supercomputers, which have an energy consumption literally a million times larger. Doesn’t sound that bad for a thing made of whole cells, does it?
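
            (Rough arithmetic, with round numbers of my own choosing: roughly 20 W for a brain and roughly 20 MW for a top-end supercomputer, both order-of-magnitude estimates.)

            brain_watts = 20
            supercomputer_watts = 20_000_000  # tens of megawatts for the largest machines
            print(supercomputer_watts / brain_watts)  # 1000000.0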

            And that’s just hardware. The idealized process (algorithm) itself might be improved, as it seems clear that tremendous gains are possible from software differences (both “Einstein” and “village idiot” [and “llama”] run on very similar hardware.

            The village idiot isn’t a fair data point, since they have a defective brain and there are more ways to break something than to improve it.

            Einstein did indeed have a bigger brain than a llama.

            The proper comparison would be between Einstein and an average human. I don’t think we can say that Einstein was much smarter than an average human compared to how much an average human is smarter than, say, an average chimp or llama.

            So superintelligence is a likely result from hardware improvements (billions of Einsteins on a computer every second)

            Billions of chickens on a computer every second is not a superintelligence, it’s just a very crowded simulated chicken farm. Similarly, a chicken sped up a billion times is not a superintelligence, it’s just a very bored chicken.
            Why would it be different with Einsteins?

            or maybe from software improvements (llama is to Einstein as Einstein is to ???*superintelligence*)

            It’s not clear that there are huge “software” (that is, structural) differences between a llama brain and Einstein’s brain, and whatever the differences are, there is no reason to believe that there is much room for improvements just by “software” modifications. There are only so many low-hanging fruits.

          • “No, but it may well be an obstacle to “superintelligence”.”

            As I’ve mentioned before, only half joking, John von Neumann is the existence proof for the possibility of a physical system that exhibits superintelligence.

            We can go further. Groups of humans are also physical systems. Imagine gathering together the top human minds in *every* realm of human endeavor. You now have a physical system of astonishing intellectual versatility that far outperforms what any single human being can do.

      • Deiseach says:

        Okay, but are those of us saying “A million may be too much” really claiming 99.9999999% surety about AI risk?

        I don’t know if that’s a reasonable way to put it. Some may indeed have said “I believe the likelihood of AI risk is so little that 0.0000001% would be a better representation of the estimate I give”, but I think what most of us are objecting to is not so much the idea of AI risk (of course there’s risk involved, any human endeavour can go wrong) but rather the scale and pace of it.

        That AI will jump from sub-human level to human-level to above-human level so fast that we can’t intervene or control it; that this super-human AI will have such absolute and global control of so much of human activity that it can credibly be a civilisation-ending risk; that we will be so stupid as to let such a degree of control pass out of our hands (and sure, my motto is “humans: we’re stupid” so this to me is the most credible part of it); that the AI will have goals and volition of its own to seek those goals in opposition to us.

        And the flip side – that if we get it right, if we create Friendly AI, we (or at least potential quadrillions of our descendants) will all live happily ever after in the post-Singularity utopia overseen by our benign Fairy Godmother.

        As SF or as philosophical speculation, this is perfectly fine. As a seriously put-forward plan of action requiring the focusing of all our attention, effort and donations right now or else – I remain to be convinced.

      • Nathan says:

        Donald Trump also made a lot of money correctly predicting future values of investments.

        Just saying.

    • Geoff Greer says:

      This guy’s domain of expertise doesn’t make him credible on this.

      Though he left college to start a company, Sam Altman studied computer science at Stanford, specializing in AI[1]. He’s no PhD, but his domain expertise is better than most people who talk about AI risk.

      Also, instead of getting snippets of his thoughts third-hand, it’s probably best to read his own writings on the subject.[2][3][4]

      1. http://www.ycombinator.com/people/
      2. http://blog.samaltman.com/ai
      3. http://blog.samaltman.com/machine-intelligence-part-1
      4. http://blog.samaltman.com/machine-intelligence-part-2

      • veronica d says:

        Right. But he isn’t more qualified than some of the skeptics here. For example, I’m a Lisp programmer. I work at {big tech company that does tons of ML}. I’m (more or less) current on the topic. I read big complicated math books for fun.

        Now, I’m not saying I’m “all that,” certainly not compared to many others on this forum. I’m perhaps “above average” here, but not much more than that. But my point is, I’m at least as qualified as that guy, at least on this topic [1].

        Anyway, I think “Foom-like” scenarios are preposterously unlikely. P is probably not NP. Non-convex optimization problems are super hard. Humanity’s failures come not from wicked super geniuses, but from basic social entropy.

        The smartest person in the room is seldom the most successful. Ever notice that.

        [1] On the other hand, don’t ask me how to invest your money. I’ll tell you to put it into something involving type theory or Haskell or whatever. Your chance of profit would be low.

        • Soumynona says:

          Anyway, I think “Foom-like” scenarios are preposterously unlikely. P is probably not NP. Non-convex optimization problems are super hard. Humanity’s failures come not from wicked super geniuses, but from basic social entropy.

          The lack of foom decreases the urgency and the consequences of failure but it doesn’t really make AI safety a non-issue.

          The smartest person in the room is seldom the most successful. Ever notice that.

          If you restrict the concept of smartness to nerdy book-smarts. The slick guy in an expensive suit who’s richer than you because he’s better at climbing ladders, shaking hands and cozying up might not trigger your smartness detectors but all of those things he’s doing, he’s doing with his brain.

          • HeelBearCub says:

            Foom is the only scenario in which AI risk is a pressing concern now. Under non-Foom scenarios, you study AI risk alongside the developing AIs.

            It’s a bit like studying the risks of combustion engines when it’s 1600 and our idea is Gunpowder Engines. Sure, a modern chemist can look back and see “but, if those become ubiquitous, it will lead to global warming!” but back then they would be far more concerned with safe storage of gunpowder.

          • veronica d says:

            HeelBearClub has it right. Few will deny that AI risk is something that someday might be a problem in some capacity, but without FOOM! you don’t have the pressing existential risk. Foom! is the big danger that drives this conversation.

            Don’t shift goalposts. Saying “We should show thoughtful concern about the capacity of large-scale machine learning and other AI technology to cause harm” is one kind of thing. “The new AI masters will destroy our world” is something else.

            Arguing for one does little to establish the other.

          • HeelBearCub says:

            “HeelBearClub”

            Ooooh, I don’t know about this. I’m not sure I would ever be in a club that would have me as a member.

          • veronica d says:

            Oops. Sorry 🙂

          • HeelBearCub says:

            @veronica d:
            ;P

        • Geoff Greer says:

          Credentials and expertise can be a useful heuristic, but it’s odd to be so specific with them. Do you also ignore Nick Bostrom’s views because he doesn’t write machine learning algorithms in Lisp? At some point, one has to address the object-level. Did you read Altman’s blog posts on AI safety? Is there anything in his core arguments that you disagree with?

          If you’re going to dismiss Altman’s views based on his expertise and intelligence, you’re setting a higher bar than you might think. Y Combinator writes their own software for internal usage, mostly in Lisp and often using machine learning. Heck, the founder of YC wrote a book on Lisp[1], wrote his own dialect of Lisp (Arc), and wrote Hacker News in that dialect. Then he named Altman his successor, stating, “Sam is one of the smartest people I know…”[2].

          In his current role, Altman can’t spend a ton of time writing code, but he still does some programming. More importantly, evaluating companies often requires becoming an expert in their field. For example: to figure out which nuclear energy companies to invest in, Altman had to learn quite a bit of physics.

          Considering all that, I’m gonna go ahead and say that if Sam Altman isn’t worth listening to on this topic, very few people are.

          1. On Lisp.

          2. http://blog.ycombinator.com/sam-altman-for-president

        • Robert Liguori says:

          Tangent topic, since you bring it up: Is there a Rationalist’s Investment Group anywhere? I mean, I assume that we all are (to the extent that we are investing) investing in a diversified portfolio of low-cost mutual funds appropriate to our risk tolerance and age, but that’s just based on a small sample size.

          Is this something that there’s a discussion around somewhere?

          • drethelin says:

            generally when people bring up this sort of question on Lesswrong.com a bunch of people recommend Vanguard

    • Deiseach says:

      We’ve discussed this with the “contrarian cluster”; when do you listen to the experts over the maverick and vice versa, and does someone being really, really good in their field (particularly when it comes to challenging the received wisdom and being proved right) have some kind of general ‘correctness factor’ that can be applied to areas outside their expertise?

      I don’t think – from what I’ve gathered – that Mr Altman is an expert on AI risk, so basically his opinion is the same as “would you ask a top neurosurgeon about interior design?” He’s certainly a successful businessman, and one who can identify good tech prospects, but that does not necessarily mean he has anything more than an informed amateur interest in AI risk.

      • HeelBearCub says:

        @Deiseach:
        Whether a top neuro-surgeon should be generally trusted in their assessment of the most important areas to research in physical therapy after orthopedic surgery might be the more apt analogy.

        • Deiseach says:

          a top neuro-surgeon should be generally trusted in their assessment of the most important areas to research in physical therapy after orthopedic surgery

          Not necessarily, at least since my run-in with a consultant cardiologist who waved my questions about the circulation in my legs off impatiently with “That’s the vascular surgeon”.

          Heart is one thing, circulatory system is another, apparently, and nothing to do with one another at all 🙂

          • HeelBearCub says:

            Oh, I was agreeing with you.

            I was just saying the analogy is closer. They superficially look like they are “close enough as to be the same” when they really don’t have much going for them in terms of identifying problems specific to the other’s area of expertise.

        • Wrong Species says:

          The thing about AI risk is that we have no knowledge of what to expect. There are no experts. Just because someone has a career in identifying tech startups does not mean that they know what they are talking about when it comes to AI.

    • Earthly Knight says:

      But to a more serious point jumping off of this, I’ve been thinking about situations when I see an expert’s immersion in a field/subfield as lending credibility to their opinions on certain issues related to their field, and others when I see their immersion as detracting from their credibility on those topics. I can’t really think of a good consistent rule for this.

      There is a good, consistent rule, and you probably already have a decent sense of what it is. The only problem is that it sometimes requires a substantial amount of research and a superhuman degree of dispassion. The rule is, roughly: prescind away from whether you agree or disagree with the putative expert on any of the topics in her putative area of expertise, and evaluate her on the basis of general factors like her credentials, her track record, how broadly her view is supported by other experts, whether she has any conflicts of interest, and whether you detect any sociological biases at work in her discipline. The important part is the prescinding.

    • Eli says:

      You have assembled a Fully General Counterargument: “If he’s a domain expert, it makes him biased. If he’s not a domain expert, he’s ignorant.” Stop that.

  10. haishan says:

    That “fake CIA agent” story, while excellent, is a couple of years old.

    • I really don’t understand how that happened. Supposing for the moment that he really did work for the CIA (they denied it, so it must be true) why would the EPA keep paying him? Is it common practice for random government entities to pay the CIA’s employees?

      (I can kind of see them letting him take lots of unpaid leave, but paid leave? Seriously?)

  11. Sniffnoy says:

    Would you mind fixing the “Neural Algorithm of Artistic Style” link to point to the abstract rather than directly to the PDF? Thank you!

  12. Jiro says:

    Islands are often on sale at cheap prices (cheap being a relative term–as you point out, $5M is cheap for the area, but most of us don’t have that much money). They’re inconvenient because of the difficulty of transportation there, and the lack of electricity, garbage collection, etc., which drives down the price.

    • Andrew M. Farrell says:

      Google (which has some waterfront property on which they could build a dock, no?) or some tech company could buy the island and fill it with apartments and run a shuttle for their workers. Oracle already has expertise in watercraft if I recall.

      • Deiseach says:

        The island looks fairly small and as if there has not been any building on it. I suppose if you wanted to run up a dock and a boat shed, it would do for fishing(?) but I can’t see building any kind of house, even a small holiday home, on it – no running water, the problems with sewage (I don’t really see them permitting you to discharge waste into the bay), no power, etc.

        Maybe as a folly, sure, but you could blow five million on more convenient conspicuous consumption 🙂

      • Jiro says:

        Yes, Google could fill the island with apartments, but the fact that they have to pay a lot of money to develop the island in the process reduces the market price of the island.

      • Nornagest says:

        Red Rock has really inconvenient geography — it has basically no flat ground, and no beaches big enough to land heavy equipment. It’s small, only about six acres. And it’s in an awkward part of the Bay from a transport perspective — we’re talking an hour’s ferry ride to the terminals in San Francisco. (There are currently no ferry terminals in the South Bay or the southern Peninsula, though there are tentative plans for one in Redwood City.) It’s very close to the Richmond-San Rafael bridge, but that does no good — you can’t add an offramp in the middle of a bridge like that without spending something like a billion dollars.

        Google probably could fill it with apartments, or rather one largeish apartment complex, if it wanted to — it has the money to get over these obstacles. But for the money it would take, Google could build a lot more apartments in an area much closer to Google offices, even if it meant sinking pylons into Bay mud down near Stevens Creek or something and building on those. And Google has so far shown little inclination to play techie slumlord on a large scale.

        (Which is a shame. My rent would be lower if it had.)

    • Anthony says:

      The obvious thing, given the location, is for Eliezer to buy it, and create a Less Wrong compound on it. Of course, it would be a step down, since most of the island is in Richmond.

      Oh – the island has its own Wikipedia page.

  13. Seth says:

    “Meanwhile, on my blog, people who don’t have a day job betting fortunes on tech successes …”

    Let me see if I understand this. A man whose job partly consists of turning large amounts of publicly funded research into large amounts of private profit thinks that a large amount of publicly funded research should be done close to a field where there is a large amount of private profit to be made. Alternatively, people on your blog who have no such financial incentives – indeed, who are skeptical about the financial motives here – say such an investment is not an efficient way of producing the stated results, but sounds like a great way for involved parties to profit.

    And you believe the former, but not the latter? I think I’m missing something about rationalism.

    Ah … always follow the money (from the original article):

    “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” [that’s funny].

    “Altman shared that he recently invested in a company doing “AI safety research” to investigate the potential risks of artificial intelligence.” [Cui bono!]

    • Who wouldn't want to be Anonymous says:

      Exactly this. He wants the government to pour insane amounts of free money onto AI research companies so he can make a killing. He doesn’t give a damn about AI risk.

      • MicaiahC says:

        Um, do you realize that AI risk initiatives mean that his existing bets on AI probably go to zero?

        I mean, seriously, this is dumb: he could say *anything* about dumping money into something and you would arrive at the exact same conclusion.

        • Who wouldn't want to be Anonymous says:

          Um, do you realize that AI risk initiatives mean that his existing bets on AI probably go to zero?

          Um. I don’t think that follows.

          You are assuming that:
          A) He has made significant AI bets
          B) That AI safety research will have an effect of significant magnitude on existing AI companies within a reasonable time horizon
          C) The sign of that effect is necessarily negative

          We’ll skip over A because it isn’t really that interesting. I think B and C are both false.

          There is no reason to believe that AI (safety) research will have any effect, much less a negative effect, within time horizons that are short enough that a savvy investor couldn’t react.

          So tell me, how, exactly, does AI safety research impact, say, a company developing better AI to preprocess Google Street View images and LIDAR maps to build a better self driving car? Or a better predictive keyboard? Or better image recognition software? Or a scientific paper vetting neural net? Or anything AI companies might actually be doing right now.

        • Deiseach says:

          Um, do you realize that AI risk initiatives mean that his existing bets on AI probably go to zero?

          If he’s involved with companies working on AI risk, and those companies are likely to get established as “Government will consult these for implementing national standards governing AI”, then he’ll be on the pig’s back where betting on AI is involved. Any kind of “inside knowledge” on what the likely limits on AI may be means that he’ll be in a better position to evaluate “Nah, University of Bongoville’s SmartBot is going nowhere, that’ll never pass the new regulations!” while “Big Brainz plc are ticking all the regulatory boxes and jumping through all the hoops, their KleverKlogs AI is the one to invest in”.

          I know we’re all leaping to the conclusion that this is not pure disinterestedness on his part, but excuse our cynicism: when a businessman says “The government should spend money on – ” the corollary of that is generally “- getting some of that into my hip pocket” 🙂

        • Nornagest says:

          I wouldn’t say that AI risk initiatives are likely to seriously depress current-gen AI applications — except for some blue-sky stuff that’s extremely far from product and generally undertaken by companies well out of venture capital, commercial AI work isn’t even trying to produce something with any remote potential to be unsafe. (In the existential sense, at least; there’s some potential for only mildly exotic engineering safety problems.)

          But by the same token I’d say that AI risk research, if it’s anything like MIRI’s stuff, has very little application to commercial machine learning, and so the “make a killing by leeching off public funding” intimation doesn’t work either.

    • John Schilling says:

      Even giving him the benefit of the doubt on motives, he’s advocating two of the classic blunders in research funding. First, proposing to expand research in a particular field by more than an order of magnitude in less than a decade, and second, promoting an end goal without proposing a specific path to that goal.

      These are particularly dangerous in combination. A massive funding push runs into the problem that there aren’t enough qualified people and institutions to absorb the funding, but you can sometimes channel that into related fields if you know specifically what you are trying to do. The Manhattan project had a very specific proposal on how to Kill Lots of Nazis (er, Japanese? Commies, maybe?) Real Soon, so the budget in excess of “hire every competent expert in nuclear fission” could be put directly to work on e.g. plutonium metallurgy. “A hundred billion dollars for AI safety research, right now”, is more like the Strategic Defense Initiative. Remember how SDI carefully avoided squandering billions on every loony idea that crawled out of the woodwork, and permanently retired the risk of a civilization-destroying nuclear apocalypse through the sheer awesomeness of Scientific Research?

  14. Douglas Knight says:

    I once went to a wedding on the grounds of the Delaware governor’s mansion. It seemed like a mansion to me, but I can’t remember any details.

    Looking for other sources, it seems like it’s actually a lot bigger than the picture makes it look. It’s a normal simple boxy house, scaled up so that the doors and windows are ridiculous. Also, the other side has a nice porch. But it probably is smaller and less interesting than the rest.

  15. Rob says:

    How has no one commented on the fact that Scott Adams, in the same article in which he claims Trump has a 98% chance of being elected, also claims that
    “In other news, for several years I have been tracking a Master Wizard that I believe lives in Southern California. It seems he has trained a small army of attractive women in his method. The women create a specialized style of porn video clips that literally hypnotize the viewer to magnify the orgasm experience beyond anything you probably imagine is possible. Hypnosis has a super-strong impact on about 20% of people. And a lesser-but-strong impact on most of the rest.”

    Does anyone else have any background on what he is even talking about? If not I’m inclined to treat both of his claims as coming from a raving lunatic.

    • Chris says:

      If I had to venture a guess, it means Mr. Adams has spent a disconcerting amount of money on pornography.

      • Scott Adams says:

        Is there a wrong amount to spend on porn? I would hate to accidentally make that kind of mistake.

        • Chris says:

          Glad you asked, Scott. I wouldn’t want you to make mistakes like this either. The answer is that like all things in personal finance, it will vary with the person.

          A penny is too much for me to pay given the state of my finances and how much good quality porn can be had for free. If a hypothetical person has $75MM, the scale may well tip towards paying for quality porn.

        • So Scott, are you going to take anyone up on offers to bet about your claim? Heck, I’d be perfectly happy making it an even wager, say for $50.

    • drethelin says:

      In his recent series of blog posts he has been referring to people following a certain school of social manipulation and hypnotism as wizards. Among the people advised by this school are Bill Clinton and Donald Trump

    • Gunther says:

      Scott Adams has a weird position in the rationality community – he believes a lot of crazy nonsense and isn’t shy about blogging about it, yet somehow he’s treated as a member of the in-group. I’ve seen plenty of rationalists politely ignore his crazy side and quote or link to his saner posts.

      Possibly he’s just good at sounding reasonable? Or is it because he’s moderately famous and that translates to higher status?

      Whatever the reason, it’s always struck me as odd.

      • suntzuanime says:

        It’s RationalWiki that tries to identify all the crazy nonsense and then exclude anyone who blogs about it. I think it says good things, not bad, about this rationality community if we are willing to link interesting and thoughtful posts even if they’re posted by someone who has also posted things we view as “crazy”.

        • Gunther says:

          OK, but we wouldn’t extend this praiseworthy tolerance to anyone else.

          If Robin Hanson made a blog post tomorrow about how he was hanging plaits of garlic from his rafters to scare away vampires or if Luke Muehlhauser declared he’d cured AIDS by drinking his own urine, we’d stop treating them as members of the in-group pretty darn quickly.

          Scott gets a pass despite having beliefs that are at least as wacky as either of those. It’s clear we’re dealing with an exception here and I’m curious as to why it’s an exception.

          • suntzuanime says:

            I dunno, Eliezer Yudkowsky has said some pretty bizarre things and we haven’t stoned him to death either. Doesn’t Leah Libresco drink blood because she thinks it will make her live forever? I’m not sure your factual claims are, like, true, at all.

          • Gunther says:

            It might just be me, but I haven’t seen any other respected figure in the rationality community get away with saying the kind of stuff he says on a regular basis.

            Yudkowsky has some odd ideas but I don’t think they’re crazy-odd, just unusual-odd. It’s not like he’s ever claimed evolution is “bullshit”, or that he can alter reality by thinking about it really hard, or any of the other New Age crap Adams buys into.

            As for Leah Libresco, I hadn’t heard of her before, and after googling, she’s a Catholic blogger who likes Yudkowsky? I am unsure of the relevance.

            (edit) OK, after reading a little bit more about Libresco, it seems she was an atheist blogger who converted to Catholicism a few years back, for fascinatingly unusual reasons. That said, calling her a respected figure in the rationality community is a bit of a stretch, considering I’ve been reading various rationalist blogs for years and had literally never heard of her before now.

          • suntzuanime says:

            I am absolutely certain Yudkowsky thinks you can alter reality by thinking about it really hard. Otherwise he wouldn’t be trying to teach us to.

            I feel like you’re equivocating between “ingroup member” and “respected figure”: one is much much stronger than the other.

            Leah Libresco is an ingroup member who is also a Catholic blogger, which involves some crazy-odd ideas.

          • Anonymous says:

            >Scott gets a pass despite having beliefs that are at least as wacky as either of those.

            I don’t think his beliefs are as wacky as all that.

            (At least not recently expressed ones – he did have some thing about wishing reality to go the way you want in his older books, but I haven’t seen him talk about that lately.)

            edit: FWIW, while I find Adams’s blog entertaining sometimes, I also feel like he doesn’t value intellectual honesty enough to quite be a rationalist as such.

          • Gunther says:

            @ suntzuanime

            I’m not trying to be a dick here, but I find your assertion that the rationalist community is tolerant of crazy viewpoints extremely unconvincing when the craziest viewpoint a tolerated in-group member has that you can point to is mainstream Catholicism.

            …And you felt the need to make fun of the in-group member for believing it. Think about that for a minute.

            @Anonymous

            Let’s just agree to disagree on how crazy you’d have to be to call evolution “bullshit” and spout a bunch of new-age woo about how wishing really hard shapes reality.

          • Protagoras says:

            Dilbert used to be funny, and still is sometimes (and is perhaps more so to people who work in the kind of office environments it focuses on). It also presents nerd culture sympathetically. I’m sure that explains why a lot of nerdy types are willing to be charitable in return.

          • cewr says:

            Scott Adams sneaks through our filters by being a Scott A. with a blog. Alexander and Aaronson are great so…

          • HeelBearCub says:

            @Gunther/@suntzuanime:

            I find it interesting/odd that you are concentrating on the odd ideas of people in the in-group.

            The real questions (to me) are:
            – Does the rationalist community treat odd views of people in the out-group differently than those of the in-group?
            – What qualifies as an odd view to people in the rationalist community?

            I think we would find that those two points are intertwined. Is NRx an “odd view”? What about the view that social opprobrium is a proper way to influence group behavior?

          • Anon says:

            >Doesn’t Leah Libresco drink blood because she thinks it will make her live forever?

            What?

          • youzicha says:

            How about Muflax, or Will Newsome? They were solid in-group members despite saying some pretty outlandish things.

            I’m not surprised that Scott Adams is popular among less-wrongers, because his taste in ideas seems exactly like the LessWrong mainstream: self-help advice based on simple models of cognition (reinforcement training/hypnosis); doubting whether the scientific establishment is correct (dietary science/evolution); quirky libertarianism. I think Adams is different from the median LessWronger in that he is less interested in checking whether the things he spouts off are actually true or not, but on the other hand, isn’t that kind of true for Robin Hanson also?

          • Zorgon says:

            Catholicism.

          • Anonymous says:

            ^Anon, he’s (somewhat snarkily) referring to communion, drinking “the blood of Christ”. Leah is Catholic, and so believes the wine is literally transformed into the blood of Christ (but not physically; to get at the distinction, take a look at the wiki page for “Philosophical accident”). I was confused when I first read it too, but he’s really just saying “Leah Libresco is a Catholic”.

      • Scott Alexander says:

        1. When I was growing up, his comic was the only one in the newspaper that was really, genuinely funny, and helped shape my own sense of humor for the better (in fact, now that I think about it, I think his humor is notable for technique in the same way he says Trump’s speeches are notable for technique).

        2. I don’t see anything that weird he believes (other than 98% chance Trump). His hypnotism stuff is excessive, but a lot of people I respect (including Brienne) are into hypnosis, and a lot of psychiatrists use it as a big part of their practice. My guess is that there are a very small number of hypnotists who at least come close to living up to Adams’ expectations of the field.

        3. He needs to be recruited as a member of the Vast Scott A Conspiracy along with Scott Aaronson and myself.

        • walpolo says:

          You must’ve grown up after Calvin and Hobbes and the Far Side ended. Compared with those, Dilbert is weak sauce.

          • Nornagest says:

            Calvin and Hobbes is a national treasure, but except for the biology jokes I’ve found The Far Side hasn’t aged quite as well.

            (I might still rank it above Dilbert, though.)

        • Anonymous says:

          Adams has claimed, among other things, that evolution will be disproved in his readers’ lifetimes, that affirmations work, that dyslexia may be due to a (nonsubjective) nonlinear experience of time, and that gravity could be due to all matter constantly expanding and there exists no evidence against this possibility. That looks a lot less like “healthy skepticism of models” and a lot more like “inability or unwillingness to evaluate models, internally or externally, before deciding whether to believe them”.

          I like Dilbert too, but seriously.

          • I dunno … he’s also open about the fact that he writes to entertain, not to inform. You can’t reliably determine his actual beliefs from what he writes.

            (IIRC, and IMO, the evolution thing isn’t quite as bad as it sounds – he’s just using the word “evolution” in an unusually narrow sense. For example, I think he once mentioned a proposed ToE that involves holography as something that would disprove evolution if widely accepted, on the grounds that if reality is 2-dimensional and the Earth is just a sort of holographic projection then evolution isn’t really real. Kind of true, you see, if your definitions are narrow enough. And I’m about 80% sure that he only says that sort of thing to annoy people.)

            I do suspect that he actually believes that meat is bad for everyone because he personally has a food intolerance to meat – symptoms similar to IBS if I remember rightly. If so, that isn’t terribly rational. But that’s still guesswork on my part.

          • BBA says:

            To me it looks more like “trolling.”

      • thirqual says:

        “Also, if they [experts and scientists] are using a model, I pretty much discount everything I hear. But if they are just looking at data like a scientist and saying, “When this happens, that happens,” then I’m going to put more stock in it.”

        Interview of Scott Adams by Julia Galef for CFAR. This went unchallenged. Update about Scott Adams and CFAR as needed.

        • 27chaos says:

          Primitive anti-Euler defenses. Not ideal, but CFAR is kinda about mainstream rationality. Also, interviewing is hard. Don’t update too far.

          • thirqual says:

            And since when is mainstream rationality separated from basic knowledge about how science works? It is, at best, terrible PR.

        • Vaniver says:

          Part of the context (Julia summarizing Scott’s position in his book):

          And about looking for overlapping sources of evidence — not relying solely on your own personal experience, but cross-checking it with expert advice or scientific studies.

          Another part of the context (Scott’s response to a followup question):

          The jargon, buzzwords, ideas about business management and success — they all seem to have the same quality. Which is: whoever first came up with the idea looked at some places it made perfect sense, and then imagined therefore it could be generalized to other completely different situations. That tends to fail almost all the time.

          And then Adams mentions the reversal test–but not by name.

          It seems to me that this is related to an example of the dynamic discussed in Intellectual Hipsters and Meta-contrarianism, where it’s often difficult to tell the difference between one level below and one level above.

          Many scientific models are really dumb, and scientists present and use them without realizing it. One of my friends who was a physics graduate student once attended a candidate talk for a faculty position. Part of the speaker’s work had involved estimating the flow rate of fluids through small tubes, and they graphed the predictions of their model; my friend noticed that the x axis on the graph extended well below the size of a hydrogen atom, at which point the assumptions on which their model rested were clearly inappropriate. He pointed this out, and that candidate ended up not getting hired.

          That is, suppose Adams’s inner state is something like “only trust people with causal models, don’t trust people with statistical models,” but he doesn’t have the vocabulary to express that clearly–the best he can do for causal models is “looking at data like a scientist.” Similarly, someone familiar with the problems of extrapolation and limited assumption ranges could generate a very similar statement. But so could someone who just doesn’t believe that the world is a knowable place. So it’s potentially ambiguous which state he’s in.

          But it also seems to me that it dramatically understates the difficulties in communication to resolve that uncertainty to “clearly Adams is an idiot, and so is Julia for not noticing.” That’s what follow-up questions are for.

          • thirqual says:

            Scientists are people, people are dumb, news at 11 (also, all models are false). This is not relevant to whether models are good or not, even if you decide to distrust “only” statistical models. What you say about extrapolation and assumption ranges applies equally to both types of model.

            See above other commenters mentioning other interesting opinions held by Adams, for the ambiguity.

            About Julia, and CFAR in general, I give them the benefit of the doubt. But they appear to care a lot more about internet celebrities than about correct beliefs on building knowledge (or PR towards scientists, but I understand that).

    • shemtealeaf says:

      I’m fairly certain that I’ve run across a clip very much like what Adams is describing on one of the ‘porn tube’ sites. I didn’t make the connection to hypnotism, but it was something vaguely in the vein of ‘jerk off instruction’, with instructions to send the girl money mixed in. The part I watched seemed like it was probably part of a longer video, and the whole thing was super weird.

      I might have it bookmarked somewhere; I’ll see if I can find the link when I get home from work.

    • jimmy says:

      I do. Hypnotism is weird.

      On the one hand, a lot of people (hypnotists and subjects included) buy into the “super powerful mind control” thing more than is realistic. Some hypnotists will even know they’re exaggerating and do it for the effect anyway – and the fun of feeling “powerful”.

      On the other, it *can* be super powerful mind control – especially if you go into it with the wrong mindset. I highly recommend staying the hell away from the “erotic hypnosis” scene. People really get into *unbelievably* fucked up situations – like getting hypnotized into meeting the guy in person and getting sexually assaulted, only to not realize what is going on until months later, when the evil bastard accidentally says something that she *notices* is creepy – and then taking a month or so with a different hypnotist to undo all the suggestions that led her back to him.

  16. onyomi says:

    Wow, I would not want to be governor of North Dakota. Kentucky on the other hand… Also, Arizona’s is cute, if small.

  17. bluto says:

    I’m disappointed the Schwab test didn’t try, “Sell in May and go away, but remember to return in November.”

  18. onyomi says:

    Another anecdote regarding the high carb thing.

    I was recently talking to a Chinese friend who says her mother always sends her back to America with a large care package of chrysanthemum tea. This, she claimed, was for the “cooling” effect this tea has on the body in the Chinese conception. The reason the mother feels the daughter needs lots of “cooling” when living in the US is because Americans eat “so much meat,” and meat causes “heat” in the Chinese conception (which is interesting, given I’ve seen some Vegan-friendly studies claiming that all animal products tend to increase inflammation).

    Now this is a country where people cook everything in lard and you find tiny bits of pork in your cake. And THEY think Americans eat a lot of meat.

    When you compare the diets of Americans to the diets of people in places where people aren’t fat, overwhelmingly the most salient aspect of our diet is the large quantity of meat we eat, not the large amount of carbs, or even the large amount of sugar. We just eat way more meat than most people (we think a large piece of meat, like a steak, is a dish in its own right, whereas most other cultures treat meat as more of a topping or filling for some staple like rice, corn, pasta, etc.) and we also happen to be way fatter than most people.

    And what is the dietary advice that’s most fashionable today in the US? Eat more meat.

    • kerani says:

      I think your data on American meat consumption is perhaps not up to date. Overall, American obesity has tracked with increased simple carbs, not meat consumption.

      (Not saying there is a cause-effect here, because that evidence isn’t very clear so far.)

      • onyomi says:

        I recently moved away from a part of the US where trendy, health-conscious people are eating less meat. It’s not uncommon there to meet vegans and even raw vegans. And people there are relatively skinny.

        I recently moved to a part of the US where I seriously doubt meat consumption has dropped at all. A huge pile of smoked pork and sausage plus a little potato salad and slaw is considered a very good meal. There are hardly any vegetarians or vegans. And people here are much fatter.

        The article may point to a recent trend in trendy places, but I still think that the most salient feature of the typical American diet is having a lot more meat than average. Not saying it’s the *only* factor, but it still seems bizarre to me to look at Americans, who are fat and eat a lot of meat compared to thinner populations, and then say “well, I guess we need more meat!”

        • keranih says:

          I still think that the most salient feature of the typical American diet is having a lot more meat than average.

          It’s an interesting theory, yes. Just not upheld by the data.

          Again, I suggest that what you are seeing is one part of a broad cultural difference between two locations, which goes far beyond the contents of the diet.

          You might want to think about what you mean by “trendy, health conscious people” and why you might not realize if any of the people you are meeting now are vegans.

          (I have to admit that my pov on the (Southern) American meat consumption level is heavily influenced by time spent in Argentina, where everything else – including water and tea – is just a side dish to the main meat course.)

          • onyomi says:

            I think it is, broadly, upheld by the data:

            http://chartsbin.com/view/12730

            http://static4.businessinsider.com/image/54c127cfeab8ea447d9135d5-1128-515/screen%20shot%202015-01-22%20at%2010.27.46%20am.png

            The countries with the highest meat consumption are generally the fattest. Of course, it may be that meat consumption is just a proxy for wealth, in which case it may be that rich people eat meat and are fat, but eating meat doesn’t make you fat.

            I know the Southerners around me right now are not vegans because I see everyone eating meat all the time.

            That said, I also see them eating a lot of other fattening things. I went to a sushi restaurant here recently and was unpleasantly surprised to find that virtually all the sushi had some form of fried tempura batter mixed into it. And of course the patrons were, by and large, fat. So it also seems to be that Americans just have pounded their tastebuds so hard that they seem to, on average, want only the sweetest, meatiest, greasiest possible food all the time.

            Related, some obviously left-wing scholars have even found a way to blame this on “neoliberalism”: http://www.ncbi.nlm.nih.gov/pubmed/26282708

            Re. Argentina: I’ve never been there, but I have a vague impression that the meat they eat tends to be pretty lean? Maybe the culprit is saturated fat and not meat per se?

    • Ever An Anon says:

      The “cooling” and “heat” stuff is just Humorism. For whatever reason it’s actually quite popular in China right now, and oddly they think it’s a traditional Chinese idea* to the point that my girlfriend was shocked that I already knew about it.

      TCM is pretty silly when you come down to it. The specific advice can be good, especially if it’s actually traditional and not just Traditional™, but the explanations are ludicrous at best.

      *Despite the fact that it doesn’t work with the Chinese element scheme, only the Indo-European one. Galen’s humors don’t have any way to include “wood” or “metal” after all.

      • onyomi says:

        I’m not saying I buy TCM in general, but it makes some interesting and weird connections to actual physiology at times. That is, I wouldn’t be surprised if there was a general correlation between things they consider “heating” and things we later find out are pro-inflammatory.

        • HeelBearCub says:

          @onyomi:
          I think we could summarize your idea as map/territory?

          IOW, the territory is inflammation, the map is “heat”, and then people use the map to make predictions about all new territory (and those ideas are usually wrong) …

          • onyomi says:

            That may be a useful way of thinking about it, and, indeed, of thinking of all medicine prior to the microscope, perhaps. What is maybe most interesting is the possibility, and, imo, probability, that there are cases where the Chinese (or Ayurvedic, or…) map may hint at aspects of the territory we haven’t yet explored.

            Of course, because the TCM map is based largely on gross-level observation and trial-and-error, there are bound to be many places where it is just wrong about the territory.

      • houseboatonstyx says:

        @ Ever An Anon
        *Despite the fact that it doesn’t work with the Chinese element scheme, only the Indo-European one. Galen’s humors don’t have any way to include “wood” or “metal” after all.

        Might take a look at the Indian Ayurvedic system, where different foods increase one of their three ‘humors’: pitta (heat, ruddiness), kapha (cool, heavy, wet, steady cf phlegmatic), or vata (air element, flighty cf melancholy). Perhaps Bodhidharma who famously carried the Dharma from India to China, brought the pitta along with it. (Carrying the Dharma to convert another nation, sounds like a pitta thing to do, at least if done successfully.)

        Kapha combines earth and still water, known to us as mud. It might be fun to check some dates and speculate whether Kapha lost one of Galen’s four humors or Galen added one to Kapha’s three.
        https://en.wikipedia.org/wiki/Bodhidharma

        • Ever An Anon says:

          See, that’s a humor system that looks like it works with the Gunas (or Paracelsus’s Three Primes if Europeans had ever gotten hold of Kapha’s humors) but it still wouldn’t play nicely with the five Chinese elements nor give the hot/cold wet/dry square.

    • Shenpen says:

      France considers meat a meal as well. They are not fat.

      Also, they don’t cook it properly. When you eat a duck that could best be described as “rare medium”… that is not a good day. Chewy chewy chew chew.

  19. Logan says:

    The article about Euclid’s Elements is fascinating. I’ve seen lots of people claiming that the proofs are flawed, but never a thorough demonstration of the fact. I’ll have to commit that false proof to memory.

    But I have to contradict the claim at the end of the article, that working mathematicians assume the Elements are the pinnacle of good proving. That’s certainly not my experience. Elements was one of my favorite books in high school, and every time I praise it every mathematician in the room reminds me that the proofs are no good and I shouldn’t waste my time. Anyways, if a mathematician who had actually read some of the proofs mistook them for rigorous, I’d seriously question by what virtue he adopts the title “mathematician.”

    • Douglas Knight says:

      The proofs are a lot more rigorous than in modern geometry. I think you’re confused about what mathematics is.

      While I’m on the subject, the isosceles article complains that the definitions in the Elements are both meaningless and never used. Russo proposes that they were added by Hero 400 years later. He argues at great length against the last sentence of the article.

      • Logan says:

        Would you mind elaborating on the first part? Which parts of modern geometry? What do you think mathematics is?

        • tgb says:

          I’m interested in this too. I’m a graduate student in geometry and am not sure what Douglas Knight is saying. My best guess is something like “Modern geometry (like most fields of math these days) doesn’t eschew intuition for formal proofs. So when we critique Euclid for depending on intuition, we should know that that critique also applies to modern math. Since we take modern math to be acceptably rigorous and error free, we must also take Euclid to be so.”

          • Douglas Knight says:

            Geometry is sloppier than other fields. It relies more on diagrams, just like Euclid. For example, Smale’s theory of handle moves or most arguments about the mapping class group.

      • malpollyon says:

        Euclid’s very first theorem fails. Geometry over the rational numbers satisfies Euclid’s axioms but doesn’t admit equilateral triangles. If you call that more rigorous than modern geometry I don’t know what to say to you.

        • haishan says:

          I mean, it’s true that Euclid did a pretty bad job of formalizing his axioms. But it’s not like working mathematicians painstakingly construct proofs step-by-step in Coq; they use at least as much intuition as Euclid did, they’re just better at going back and justifying it rigorously.

        • Rinderteufel says:

          What do you mean by “doesn’t admit equilateral triangles”? Surely there are equilateral triangles where all sides have rational length?

          • suntzuanime says:

            There are no equilateral triangles where all vertices have rational coordinates, I don’t think. The altitude of an equilateral triangle is an irrational multiple of the base.

          • Douglas Knight says:

            Euclid’s first theorem is that for any line segment, there is an equilateral triangle on that base. STA’s argument shows that this fails over the rational numbers for the segment (0,0) to (0,1). But in fact the rational plane fails more badly and does not admit any equilateral triangles at all. (Compare to 3d: there are some rational equilateral triangles, like (1,0,0),(0,1,0),(0,0,1), but there is still no equilateral triangle with vertices (0,0,0) and (0,0,1).)
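
            A minimal Python sketch of the two concrete claims here (the helper function names are just for illustration): the 3-D triangle above really is equilateral, and 3/4 is not the square of any rational, which is why the apex of an equilateral triangle on a unit segment can never have rational coordinates.

            ```python
            from fractions import Fraction
            from math import isqrt

            def sq_dist(p, q):
                # squared Euclidean distance, exact for integer/rational coordinates
                return sum((a - b) ** 2 for a, b in zip(p, q))

            # The 3-D example: (1,0,0), (0,1,0), (0,0,1) is equilateral --
            # every squared side length comes out to 2.
            pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
            assert {sq_dist(pts[i], pts[j]) for i in range(3) for j in range(i + 1, 3)} == {2}

            # In the plane, an equilateral triangle on a unit segment needs an apex whose
            # second coordinate y satisfies y**2 == 3/4.  A positive rational n/d in lowest
            # terms is a rational square iff n and d are both perfect squares; 3/4 fails,
            # so Euclid's Proposition I.1 has no solution over the rationals.
            def is_rational_square(q):
                n, d = q.numerator, q.denominator
                return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

            assert not is_rational_square(Fraction(3, 4))
            ```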

          • Does Euclid ever make any allusion to a coordinate system? Does he ever imply that all points must be describable by rational numbers in a rectangular coordinate system?

            Without evidence to the contrary, I would guess that Euclid uses the same intuitive definition of a plane that most people have, as a continuous field of infinitely precise points. That the field contains mostly irrational numbers is an inference that can be made from Euclidean geometry (and a proof of this was known to the Greeks), but it’s not a refutation of Euclid.

          • Rinderteufel says:

            Well, I think I see the problem of the missing intersection of the circles in Q^2. But Euclid’s fifth axiom also seems to be invalid in Q^2 (using the same points). I feel this might be a conflict between our modern, abstract definitions of “line” and “circle” and a definition that might be more natural for someone who does a lot of circle-and-straightedge geometry in the sand. Intuitively, a line in Q^2 does not look like a “line” at all.

          • Andrew G. says:

            @ Mai La Dreapta:

            I would guess that Euclid uses the same intuitive definition of a plane that most people have, as a continuous field of infinitely precise points.

            That’s exactly what he did, but that’s also exactly the point – it was an intuitive assumption, not an axiom or a theorem, hence not rigorous.

            The existence of a model of the axioms (the rational plane, even though this wasn’t Euclid’s intended model) in which the theorems are false proves that the theorems are not a logical consequence of the axioms.

            And I personally don’t think it’s a coincidence that the problems that so vexed the ancients – squaring the circle and so on – hinge on the question of which of those points in the “intuitively” continuous plane can actually be reached by construction.

            (I once got into an argument online with an angle-trisector who had made a very simple mistake – assuming that a condition that held at point A and at point B also held for intermediate points of AB – but refused to accept either that he’d failed to prove the fact, or that it was in fact false – which was easily demonstrated with trig, but the guy didn’t accept trig as valid.)

  20. Skef says:

    Re: Modafinil. A number of people in my larger social group were experimenting with continuous use years ago, “before it was cool”. Those experiences generated at least anecdotal evidence that it makes some people into assholes, or more asshole-ish. Of course, one isn’t likely to be bothered by that side effect while in that state. And perhaps we live in an era where that’s a feature.

    Would these “not many side effects” studies be able to pick up on that?

    • Nita says:

      I tend to be terse and less friendly when I’m focused on work (no modafinil involved).

    • nope says:

      I was experimenting with ADHD-type treatments (for ADHD) a while back and tried modafinil. I was definitely more aggressive and irritable on it than the mainline stimulant treatments, but I also did a somewhat high dose. But yeah, stimulants, especially at too-high doses, definitely cause irritability for many people. It’s not usually something people think to report or connect to the drug.

      • John Schilling says:

        Alternately, sleep deprivation causes irritability in people, and neither modafinil nor mainline stimulants do anything to counter this.

        Sleep is complicated; last time I checked, nobody really had more than handwavey explanations of why it’s even necessary. The odds that we’ve stumbled on to a drug that is uniformly effective at reducing all of the adverse consequences of sleep deprivation, without even knowing what we’re really trying to do, are small.

        And even if avoiding sleep isn’t the reason you’re experimenting with modafinil or amphetamines, it’s likely to be an unintended consequence.

        • nope says:

          Perhaps for others, but for me it was not an effect connected to sleep. Taken at normal therapeutic times, none of the stimulants I’ve taken have had any effect on when or how long I sleep, and I uniformly sleep 8+ hours a night. I’m pretty confident in the modafinil-irritability and general high-stimulant-dose-irritability connection because the baseline conditions were essentially the same for all of them, and I was only irritable while the drugs were in effect, not before or after.

          There’s a warning on both ritalin and amphetamine about their potential to induce mania, and irritability is a very common symptom of mania. I expect as clinical use of modafinil becomes more widespread that it will soon earn the same warning.

    • speedwell says:

      My mother tried modafinil for narcolepsy. It worked, and she was objectively bitchier. When I talked to her about it, she said she realized she was like that, but she attributed it to being “just more sensitive to things”, which I suppose made sense, if before the medication she had been unable to focus properly.

  21. Cerebral Paul Z. says:

    Uncle Eleven’s favorite actress: 54 Tyler.

  22. Anthony says:

    California has a very nice-looking Governor’s Mansion that hasn’t been used since 1967, when Nancy Reagan told Ronnie “I’m not putting my kids in that firetrap”. The mansion still hasn’t been fully renovated. Jerry Brown lives in an apartment nearby that’s somewhat less ascetic than the one he used in his first two terms, when he doesn’t commute home to Oakland.

    When Arkansas renovated its Governor’s Mansion, they got something more in keeping with traditional Arkansas architecture for Huckabee: a triple-wide.

    • 27chaos says:

      Mathematicians are trolling us all, gotta be. It’s a conspiracy, and I think the physicists may be in on it too.

  23. Thecommexokid says:

    There is a surprising mismatch between the various times in this list you say “Finally” and the actual final item in the list.

    • Scott Alexander says:

      I’m using “finally” to mean “at last!”, not “this is the last item on the list”

  24. John Schilling says:

    100,000 Hours: OK, why is it a joke? Granted, it’s the sort of proposal where if you don’t immediately say “it’s a joke”, a hundred thousand SJWs will descend upon you with the burning rage of a thousand suns and leave only a smoldering wasteland where once your hopes and dreams were planted. But, for a non-trivial set of would-be altruists, isn’t this likely to be the most effective variant of “earning to give”?

    • Dude Man says:

      The website is a parody of 80,000 hours. Marrying to give is their way of making fun of earning to give; both may (or may not, I’m not taking sides) be the best way to give effectively, but no one will buy that that’s why you’re doing it.

    • Nita says:

      a hundred thousand SJWs will descend upon you with the burning rage of a thousand suns

      Nah, it’s a much better match for redpill talking points: beware the evil hypergamous harpies, looking to suck the lifeblood out of you and pour it into dirty African kids (to grow the supply of big black cock, no doubt)!

    • Trevor says:

      The joke is excellent satire. Satire can be funny, to some people, sometimes.

      Now, if you’ll excuse me, I really must get back to working on my modest proposal to end child poverty.

  25. cewr says:

    Scott Adams is not willing to bet on his prediction, saying “betting is illegal” when people in the comments proposed betting with him.

    • Scott Alexander says:

      THAT’S WHY WE HAVE BITCOINS.

      • roystgnr says:

        Okay, but then how sure are you he’s not betting? Using bitcoins for illegal gambling, then announcing it on his blog, would seem to defeat the purpose of using the bitcoins to hide the transactions.

      • Luke Somers says:

        AND QUATLOOS

      • Randy M says:

        We have BITCOINS in order to circumvent laws? I thought… well, that’s pretty much what I thought, but I thought we didn’t say so in all caps.

        • HeelBearCub says:

          The first rule of Bitcoin club is that you encode your meaning with asymmetric cryptography.

    • brad says:

      As I understand it, it is perfectly legal to get on a plane, go to London, and place a bet on the winner of the next Presidential election. With bookies rating Trump’s chances at 7-12 to 1, if I thought the chances were 98% that Trump would win, I’d be maxing out my credit cards and on the next plane.

    • Adam Casey says:

      I’ve offered a bet under UK law; he still declined. Which I was shocked by, shocked I tell you.

      • Jiro says:

        If you have doubts that something is legal, you would be a fool to trust a random guy on the Internet telling you “trust me, it’s legal”. The rational response to “I determined it’s legal, so you should bet me” is “go away unless you want to pay for my lawyer to examine it”.

        • Brad says:

          Given the expected value of the bet assuming a 98% probability for Trump, you’d have to be really confident there’s no legal way to bet to make refusing to invest a few hundred in legal bills rational. That, or almost pathologically risk averse.

          • Jiro says:

            You’re computing the wrong expected value. The expected value is not 98% * return – 2% * bet – legal bills. The expected value is (% chance it’s legal) * (98% * return – 2% * bet – legal bills) – (% chance it’s illegal) * legal bills.

            In other words, if it’s illegal, and he determines so by paying the lawyer, he still lost money on paying the lawyer and he doesn’t gain from making the bet. That affects the overall expectation of (pay lawyer + make bet), and if he thinks it has a large chance of being illegal, the overall expectation is correspondingly low, and he should rationally refuse to bet.

          • brad says:

            No, I wasn’t computing the wrong expected value. Multiply that out with 10:1 odds being offered in London, $500 legal bill (two hours by a low level associate), and an 80% chance there’s no legal way to do it. You still have a positive expected value if you bet $3000, and the profit shoots up from there.
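
            To make the arithmetic explicit, here is a small Python sketch plugging brad’s figures into Jiro’s formula – a 20% chance the bet is legal, a 98% subjective win probability, a $3,000 stake at 10-to-1, and a $500 legal bill – plus a far more pessimistic 2%-legal case for comparison (the function name is just for illustration).

            ```python
            def bet_ev(p_legal, p_win, stake, odds, legal_fee):
                # EV of "pay the lawyer, then bet only if it turns out to be legal"
                ev_if_legal = p_win * odds * stake - (1 - p_win) * stake - legal_fee
                ev_if_illegal = -legal_fee
                return p_legal * ev_if_legal + (1 - p_legal) * ev_if_illegal

            # brad's numbers: 20% chance it's legal, 98% win probability, $3,000 at 10:1, $500 lawyer
            print(bet_ev(0.20, 0.98, 3000, 10, 500))  # ~ +$5,368
            # Even at only a 2% chance of legality, the expected value is still (barely) positive
            print(bet_ev(0.02, 0.98, 3000, 10, 500))  # ~ +$87
            ```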

          • Jiro says:

            brad: That works for the specific numbers you made up, but it doesn’t work for all possible numbers. If he thinks there’s a 98% chance of the bet being illegal, that’s a 98% chance of losing $500, a 0.04% chance of losing $3500, and almost a 2% chance of gaining lots of profit. Many people would not take that bet even if the profit is $30000, and that assumes they even have $3500 that they don’t really need at the moment.

          • brad says:

            A little bit of googling and some common sense (think about cruise ships) should get you to more than 2% confidence that you can fly to London and gamble there without being arrested upon your return to the United States.

            I’ll grant he would need to have at least several thousand dollars at hand to bet, but we are talking about a famous and widely syndicated cartoonist and author.

            All in all, I’d say we are looking at someone that: 1) is incredibly risk averse, or 2) is morally or otherwise opposed to gambling, or 3) doesn’t *really* believe the chances of Donald Trump being the next President are 98%.

            Edit: I guess another possibility is that another hundred thousand dollars or two (probably the practical limit of what action you could get) wouldn’t make enough of a difference to be worth the effort.

          • Jiro says:

            If the “legal way to bet” consists of flying to London, you need to add in the cost of flying to London, which you didn’t. Furthermore, flying to London has a cost in time–and the opportunity cost of time goes up with his wealth, so the fact that he’s a nationally syndicated author actually makes it worse. The cost in time may be even worse if he has to get a passport or otherwise prepare for the trip.

            And I would imagine that a big reason for not taking the bet is that most people have a policy of not doing things that require large investments regardless of whether a utilitarian calculation shows that the large investment is likely to benefit them. Even if betting would be beneficial to him, all that his refusal to bet may show is that like most human beings, he’s not a perfect utilitarian. “Shut up and multiply” is practiced by LessWrong, but not by most nationally syndicated authors. (And given epistemic learned helplessness, having such a heuristic is generally wise. He doesn’t know that the plan might not have flaws in it unrelated to whether the 98% figure is accurate.)

  26. whateverfor says:

    I think you’re misunderstanding the Scott Adams post a bit. That 98% figure is total BS (intentionally so). In an earlier post Adams talks about how Trump puts out ludicrous figures not because he’s stupid but for anchoring purposes.

    “The $10 billion estimate Trump uses for his own net worth is also an “anchor” in your mind. That’s another classic negotiation/persuasion method. I remember the $10 billion estimate because it is big and round and a bit outrageous. And he keeps repeating it because repetition is persuasion too.

    I don’t remember the smaller estimates of Trump’s wealth that critics provided. But I certainly remember the $10 billion estimate from Trump himself. Thanks to this disparity in my memory, my mind automatically floats toward Trump’s anchor of $10 billion being my reality. That is classic persuasion. And I would be amazed if any of this is an accident. Remember, Trump literally wrote the book on this stuff.”

    When you have a number, even if you suspect it’s faulty, the natural tendency is to start from there and try to adjust down. This leaves you vulnerable to an attack where the attacker just inflates the number even more, so you adjust it down to their target instead of reality. I’d bet every penny I own at 5:1 odds, let alone 50:1, but that wasn’t the point of that blog post.

    • Scott Alexander says:

      I don’t think your theory means I’m misunderstanding it, I think your theory is that he’s being dishonest and I’m falling for it. But fair enough.

      • meyerkev248 says:

        Which is the point.

        It’s an old Nixon trick. Every time he got numbers, he’d lie about what the numbers said. Which meant that he got 2 newspaper articles instead of 1.

        Nixon said 98% of all left-handed businessmen support him.
        Nixon Lied. It’s only 94%.

        /Except of course, that in this case, we have 0% ability to get at the 94% number, so it’s not terribly effective.

      • whateverfor says:

        Well, I don’t think Adams is being just straight dishonest, I think he’s actually being incredibly clever. Note that the second part of the post is about how the tricks of hypnosis work even when you’re expecting them. He explains this trick in an earlier post, then does it here and it still works. Strategic self-aggrandizement, ridiculous claims for anchoring, the way it attracts attention through attacks in a way that draws attention to itself and not the thing being attacked: these are all classic Trump.

        Adams’s thesis is that the things Trump says aren’t idiot ramblings but effective strategic maneuvers, so he’s a much stronger candidate than people give him credit for. Instead of just explaining it, though, he demonstrates the techniques, so you get something that looks just dumb (like much of what Trump says), but still works because it’s attacking on a level you aren’t expecting (like Adams believes Trump does). And if the end result works, it’s like a recursive proof.

        Or maybe I’m giving Adams way too much credit and he just likes his fellow blowhards, but this explanation is more fun.

        • Douglas Knight says:

          What is Adams’s goal? What does he accomplish by anchoring us to 98%? Why does he care?

          One goal is that he wants publicity. He succeeded in making Scott link to him. But he accomplished that by the simple strategy of giving a ridiculous number.

          • bluto says:

            When someone with too much time on his hands says nuh-uh, it’s 48.47% based on my model with a zillion factors, Adams just got most of the audience to update their priors. Adams does no work and has a bulletproof defense (the author probably hates Trump, or he’d never have taken several hours listing in detail why Adams’s prediction is wrong), so there’s no reason to think the estimate is low.

            For example, Trump takes 10 seconds to say I’m worth $10 bn, some reporter spends months researching to say nuh-uh, he’s worth $250 million at best, and now everyone listening knows Trump was worth no less than $250 million a decade ago (why would the reporter inflate the estimate?), and almost certainly far more today, for 10 seconds of effort and the cost of a lawsuit. If one has no shame, that’s a ton of free labor.

          • Max says:

            1) He gets free publicity
            2) He gets huge bonus points if prediction correct
            3) ???
            4) In case he doesn’t he loses nothing

        • John Schilling says:

          Scott Adams has cleverly convinced me that, rather than being a clever guy who might be generally worth paying attention to, he’s a one-trick pony with a comic strip that’s still kind of fun if you don’t take it too seriously.

          • moridinamael says:

            I am “anchored” on having read The Dilbert Principle when I was a child and finding that the last couple of chapters are an attempt to connect the power of affirmative thinking to Quantum Mechanical Weirdness.

        • Scott Alexander says:

          If it’s an evil plot, it’s kind of a silly one. He has us slightly more convinced of a Trump victory, and in exchange, if Trump loses Adams will lose a lot of credibility as a prognosticator. He gets no advantage from the first, but loses a lot from the second.

          • whateverfor says:

            Evil plot is a strong way to phrase it, it’s a rhetorical tactic to get us to believe something he actually believes.

            In our little community, saying odds like that has a very specific meaning that implies a certain process and a very strong standard for honesty. Someone saying 98% is held to different standards than someone saying “Very Very Very Sure”. In the broader world that isn’t the case; these kinds of odds are seen as being as open to rhetoric as anything else.

          • Adam says:

            What I find interesting is that Adams, like Trump, doesn’t care about his credibility. As he says many times, he’s in the entertainment business; his blog is meant for entertainment, not actual prediction. Even when he tackles something slightly important like his “rationality engines”, in the end, it’s all for fun.

            To him, loss of credibility means nothing. He will keep doing whatever it is because it’s fun. That gives him staying power, because he loses nothing for being wrong. He can be wrong 1,000 times for every time he’s right and it won’t matter much.

            It’s a frustrating position for the rationality community, because we value accuracy. There’s just no incentive for Adams to be accurate.

          • I rather assumed the point was to encourage people to vote for Trump. Last elections, if I remember rightly, he was busy discouraging people from voting for Obama. Presumably he just wants a Republican president.

          • John Schilling says:

            If he wants a Republican president, why would he stump for a Republican candidate who has maybe a 2% chance of winning the general election?

          • Well, you’ve got me there. But perhaps he honestly considers Trump to be the most likely of the available candidates to be able to win? (That’s a pretty horrible thought, to my mind, but as a foreigner I’m not really in a position to judge the merits of the proposition.)

      • ryan says:

        From the context I think it’s clear he’s not lying:

        If I had to put a number on my prediction, I would say a 98% chance of Trump winning the whole thing. That is the direct opposite of Silver’s prediction.

        Nate Silver is far smarter than I am on this sort of topic. He’s considered the gold standard for predicting stuff that people don’t think is predictable. If you had to choose sides on the Trump predictions, the smart money is on Silver.

        That said, Silver’s predictions are necessarily based on past patterns. My predictions are based on my unique view into Trump’s toolbox of persuasion. I believe those tools are invisible to almost everyone but trained hypnotists and people that study the science of persuasion.

        What I see from my perspective as a trained hypnotist is that Trump brought a flame thrower to a stick fight.

        Since the beginning of time, every winner of every stick fight was a guy with a stick. So you’d expect that trend to continue. Until someone shows up to the fight with a flame thrower.

        I’m betting on the guy with the flame thrower. Silver is betting Trump will set himself on fire with that flame thrower, or some candidate with a stick will get lucky before now and election day. That’s what always happened before.

        But I say this isn’t Trump’s first fight using a flame thrower. I wouldn’t count on him forgetting where the trigger is.

        He plainly states he picked 98% because that plus Silver’s 2% adds up to 100. Then he swaps the usual “knife to a gunfight” image for “flamethrower to a stick fight”. If someone says “I’m so hungry I could eat a horse”, they’re not lying. Adams may not be using conventional idioms, but nothing he’s saying is dishonest. It’s just run-of-the-mill hyperbole.

    • Saint_Fiasco says:

      I wonder what Scott Adams gains by anchoring his readers (or himself) to that number.

      • 578493 says:

        A warm smug feeling, at the very least. From what I’ve seen, this kind of trick is very much in keeping with his brand of ostentatious (pseudo-)cleverness.

      • ryan says:

        I don’t think it’s that really. I imagine this conversation:

        You all are the real boobs here. Donald Trump is a genius and he’s going to win the election.

        What specific odds do you give him?

        Didn’t you hear me? I said he’s going to win.

        No, no, you can’t do that. Zero and one are not probabilities. You have to be like Nate Silver and compute an actual percentage.

        Ugh, fine, Nate says 2%? I’ll go with 98% then.

    • Scott Adams says:

      For the rational among us, I would like to remind you that no one can predict the future.

      Most predictions are “straight line” predictions assuming things stay the same. And nothing ever stays the same in the long run. There are always surprises.

      But what I see is a chessboard set-up in which Garry Kasparov wins nearly every time.

      Trump will almost certainly win the Republican nomination. That part of the prediction seemed crazy when I first made it. Now many people have trouble imagining anyone else getting nominated.

      Now imagine Trump telling the world that Clinton is a “security risk,” as he started doing today. That is a lot different from “email problem.”

      That’s a linguistic kill shot. Same way he killed Bush with the “low-energy” label.

      The trick of the linguistic kill shot is to invent a label that is not overused. No one ever called another candidate low-energy. No one ever called a presidential candidate a security risk. These are fresh fields where Trump dumps the victims.

      If Clinton does not succeed in limping to the STARTING line, that leaves you Bernie Sanders and Joe Biden as your best hope, running against Godzilla-with-a-hard-on. Does that look like a fair fight to you?

      2% chance of a Democratic victory seems generous under the “straight line” prediction. But of course, life is unpredictable by nature and shit happens.

      But the chess board is completely set for a landslide Trump win. Something NEW has to happen to stop that. And new things do happen.

      • Saint_Fiasco says:

        You should consider the probability that something new will happen, multiplied by the probability that the thing that happens is bad for Trump.

        The US has a very long election cycle, so keeping in mind that new things do happen, 98% confidence seems too high for a prediction, no matter for which candidate.

      • Scott Alexander says:

        Are you familiar with Robin Hanson’s theory of Inside View predictions versus Outside View predictions?

        Inside View is when you try to think specific arguments for why something would be true, for example your linguistic kill-shot argument for why Trump should win.

        Outside View is when you refuse to do that, deliberately dismiss all your specific evidence, and operate only based on numbers, models, and past experience.

        There’s a lot of Outside View evidence that Trump won’t win. Nate Silver’s 2% number that you bring up is almost entirely Outside View and based on how this sort of thing has worked in previous elections. So is the argument I’m sure you’ve seen where people talk about how Herman Cain and Newt Gingrich were on top for a while, which is something like “candidates without establishment support often peak early in the primary, but don’t make it to the end.” So is going off the prediction markets, which give Trump something like a 10% chance. So is going off the polls, which show Hillary winning a Trump-Clinton matchup. So is just conditioning on the chance that a given candidate ahead in the polls in August before an election year will win, which is probably high but nowhere near 98%.

        My philosophy is that Outside View usually beats Inside View, even when Inside View sounds really convincing from the inside. I don’t expect to convince you of it now, but I want to state it clearly so that if I turn out to be right I’ll have my reasoning on the record beforehand.

        Related: On Overconfidence. Do you think you could predict fifty elections this difficult and only be wrong once?
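
        For scale, a quick Python check of what a genuine 98% hit rate would imply over fifty equally hard calls (this is just standard binomial arithmetic, not anything from the linked post): about one expected miss, and only about a 74% chance of missing at most once.

        ```python
        from math import comb

        p, n = 0.98, 50                                   # claimed accuracy, number of elections
        expected_misses = n * (1 - p)                     # = 1.0
        p_at_most_one_miss = sum(comb(n, k) * (1 - p) ** k * p ** (n - k) for k in (0, 1))
        print(expected_misses, round(p_at_most_one_miss, 2))  # 1.0 0.74
        ```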

        • Vaniver says:

          Do you think you could predict fifty elections this difficult and only be wrong once?

          Well, part of the issue here is the reference class. I don’t think I can predict who will win 50 wars, but I think I can put 98% odds on the first side to use a nuclear weapon winning that war. (This prediction, coming 70 years after the fact, is not very impressive.) So I’d say the reference class is “judging the importance of improvements in military technology,” which I might feel like I could do about 50 times without getting it wrong more than once.

          And that’s Adams’s point: this election isn’t difficult to predict, if you’re looking at it from his perspective. I agree with you that one should take an inside view confidence of 98% and turn it into a total confidence of, say, 75% based on outside view grounds, but this does seem to me like an area where Adams, at least, should put a lot of confidence on his inside view understanding.

          (Conflict of interest: I’ve bet that Trump will win, and expect to do so again.)

          • Douglas Knight says:

            What odds did you bet at? Better or worse than ipredict?

          • Steven says:

            The wheels will come off Trump’s candidacy with his failure to win the Iowa caucuses.

          • Vaniver says:

            Douglas: I think it’s more likely than ipredict does, and a cursory investigation of their fees suggests that I do think there’s free money on the table. Time to set up an account.

      • Deiseach says:

        Unless the entire public of the U.S.A. is batshit crazy, they should vote for a rock in preference to Donald Trump.

        What am I saying – this is America we’re talking about 🙁

      I can see Trump’s success so far as a combination of people who really do think “can make money = can do anything”, people who share his views (or what he says in public) about immigrants etc., and people using this opportunity to put two fingers up to the establishment in both parties (Republican and Democrat) by using him as a protest vote.

        But if I thought he had a genuine chance of winning the nomination, I would seriously have to ask what is wrong with you people? At least in my country, we have enough political parties that it’s understandable a loon can run and make a showing, but you’ve only got a two-horse race!

        EDITED: People who live in glass houses shouldn’t throw stones, and after our Minister for Finance indulged in the most cringe-worthy show of arse-licking when Trump visited here to see the golf course he bought, I certainly have no place to stand and cast aspersions on American voters.

        • stillnotking says:

          The perennially dominant speech mode of the American electorate is “throw the bums out”. The perennially dominant voting mode of the American electorate is “at least our bums are better than theirs”.

          Trump would not win the general election. Even though Clinton is a mediocre politician at best, she would get 55% of the vote merely by being a Democrat who has never hosted a reality TV show. Since Trump’s supporters are aware of this, they will regretfully pick a bum instead. That will be a close race, and one the GOP will most likely win.

          You’re right that Trump is essentially a protest vote. When was the last time an American election was swung by protest votes?

          • brad says:

            >> When was the last time an American election was swung by protest votes?

            Arguably 2000 with the Nader voters swinging the election to GWB. Almost certainly in 1992 with the Ross Perot voters swinging the election to Clinton.

          • stillnotking says:

            Not those old chestnuts. Perot did not cost Bush the election; over half of his voters didn’t vote in 1988, and absent a credible third-party candidacy, likely wouldn’t have voted in 1992 either. Even if they’d discovered a sudden, inexplicable passion for politics, it’s astronomically improbable they’d have broken for Bush by the ~3:1 margin it would have taken him to gain 100 electoral votes on Clinton. Exit polls showed them to be about evenly split on their second choice.

            The 2000 election was so close that one could identify a thousand “swing factors”. I remember reading that the weather on election day probably made more of a difference than Nader’s candidacy, although I can’t find a source for that. Again, it’s really, really unlikely that enough of his voters would have picked Gore over staying home.

          • HeelBearCub says:

            @stillnotking:

            The 2000 election was so close that one could identify a thousand “swing factors”.

            This seems slightly unfair. You seem to be saying that third party candidates only count as “swinging” an election if they draw a large number of votes from only one candidate, and even then it doesn’t count if the election was close in other ways.

            Bush defeated Al Gore by 537 votes in Florida. Nader received 97,421 votes.

            That doesn’t seem like a hoary old chestnut of a myth about swinging elections. That seems like a fairly incontrovertible example of a third-party candidate making the difference.

          • stillnotking says:

            No, I’m saying it looks like motivated reasoning to say that Nader swung the election, rather than the thousand other things that made at least the same difference in votes. One could say with equal justification that the SNL “lockbox” sketch swung the election, or the retirement patterns of rich, elderly Yankees, or the election being held in November rather than June. It is a trivially true statement at best, and I’m not even sure it is true. I knew some Nader voters, and none of them would have voted for Al Gore under any circumstances — recall that 2000 was before he became a famous darling of the left for his environmental activism. He was seen as just another machine politician who happened to have a D next to his name.

            I would count a third-party candidate as swinging an election if he stole a large and unambiguous quantity of votes from one of the parties. Nader was small, and Perot was ambiguous.

          • HeelBearCub says:

            @stillnotking:

            But now you’re arguing about definitions of “swing”, which is different from something being a myth based on falsity.

            I agree with you about Perot, the counterfactual if Perot isn’t in the race is muddy at best.

            But the counter-factual for Nader is not. No Nader = President Gore. Period. The margin of victory is too close, the number of voters is too large and the Nader demographic is too skewed towards the traditional Democratic base. If you eliminate Nader, Florida goes for Gore election night (which may be why they did call it for Gore early on election night).

          • ryan says:

            It’s not necessarily a protest vote. Look for example at his immigration policy proposal:

            https://www.donaldjtrump.com/positions/immigration-reform

            A majority of voters support pretty much all of those measures. I don’t know if Trump supports them. And I really do get the impression that I’m an Eskimo and he’s selling me ice. But if people vote for him because they think that’s the sort of policies he’ll push for, it’s not a protest vote.

          • Anthony says:

            Stillnotking: the 2000 election was swung by Florida’s gay black Republicans.

        • Shenpen says:

          Given that presidents are just figureheads and the real power is in the hands of everything from the bureaucracy to business to fuck knows what, a giving-the-finger protest vote is actually a rational choice. Basically, seeing the president solely as an entertainment provider.

          I mean, why do you think Italians ever voted for Berlusconi? Because it does not really matter anyway. So you might as well vote for a good clown. One that gives the finger to all the right kinds of people.

      • shemtealeaf says:

        Do we have evidence that Bush has been ‘killed’ by the low-energy thing? There’s no polling that’s recent enough to pick up on that, but the prediction markets still seem to have Bush as the most likely nominee.

      • jimmy says:

        As a fellow hypnotist I very much appreciate the power of engineered persuasion, and I do find Trump especially interesting from this perspective.

        I also understand the certainty one can have when they know that no one else sees what they’re seeing. If you were to say “Everyone else is missing the point and underestimating Trump, and I’m 98% sure of that”, then I’d probably more or less agree.

        However, I’m not with you on the 98% chance of him winning. Not even close. If you’re *genuinely* that confident, it’d be fascinating to see what makes you that confident, and how you explain why another *hypnotist* doesn’t see it as anywhere near 98%.

  27. Deiseach says:

    Sam Altman, “head of Silicon Valley’s most important startup farm”, says that “if I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Meanwhile, on my blog, people who don’t have a day job betting fortunes on tech successes and failures continue to say they’re 99.9999999% sure that even $1 million is too much.

    And how much of that sweet, sweet government funding would go to companies that Mr Altman’s “farm” sponsors? They don’t necessarily have to provide anything useful, they just need to churn out research about “Here’s how to have safe AI”, and until we get far enough along the line of having genuine AI needing regulation to test it out on, we’ll have little idea whether the recommendations actually work. And meanwhile the “farmer” gets his cut of the largesse.

    If Mr Altman is willing to put his own company’s money into encouraging R&D start-ups working on AI risk, instead of encouraging the government to put money into it, or indeed has already done so, please show me.

    • drethelin says:

      Is your position that, rather than being very good at making money based on his predictions, Sam Altman is very good at CONVINCING people about how good he is at predicting and therefore makes money when everyone turns this into a self-fulfilling prophecy?

      • Deiseach says:

        When I see any statement that begins with “The government should pump tons of money into…”, I look at who’s speaking.

        A charity? A business group? A union?

        In this instance, we have a businessman representing commercial interests recommending that the government should hand out funding. Now, if he really thinks this is a huge and present threat, is he also actively seeking start-ups that are interested in researching this risk? “Put your money where your mouth is” is a rule of thumb here.

        I have no doubt but that he’s interested in the possibility of AI risk, and naturally (like everyone else) thinks the government should be putting money into his particular interest. But why is he saying “If I were president, this is what I’d do”? Why not use his clout and connections to get private investment funnelled to likely groups (like MIRI) who have an interest or are active in this research? Is the non-profit arm of Y Combinator getting involved here, or if not, why not?

        I’m sure he has a very good head for business and for picking likely tech start-ups that can benefit from the coaching, networking and intensive preparation the company he presides over provides. I’d be even more interested to see his estimation of the scale and time frame for AI risk – does he think this is going to happen within the next ten/twenty/fifty years?

      • lmm says:

        That is basically Y Combinator’s business model. Altman makes money when one of the companies he picked gets a Series A. That’s all. Whether that company will ever turn a profit is irrelevant.

        • Douglas Knight says:

          You seem to be saying that YC cashes out at the A round. I don’t think that is true at all. I believe that the only way they make money is if the company is acquired or IPOs. Even if they do cash out some money at the A round, they make a lot more money from the equity that they continue to hold.

          However, it is true that YC companies have a very high rate of A rounds, which may be due to a self-fulfilling prophecy. That makes it a lot easier for his companies to make money, but they still have to do that.

          • Deiseach says:

            I believe that the only way they make money is if the company is acquired or IPOs

            Being determined to pour cold water into the vessel of the milk of human kindness – suppose the government starts pumping billions into AI risk research?

            This will begin to look like a profitable opportunity to the market. So companies or businesses or institutes that engage in AI risk research will look like tasty fruit waiting to be harvested – after all they may come in for a tranche of that lovely lolly the government is flinging about.

            Step up Y Combinator and their proven track record at picking winners! Why, here’s their latest crop of AI research start-ups! Quick, let’s get them publicly quoted! Or have our guys take them over by buying the founders out!

            And that nice chunk of equity gets turned into a nice chunk of profit for Y Combinator.

            Not saying that this is what is motivating the man, but it would not be beyond the bounds of possibility that he might have an idea or two what start-ups could benefit from some government encouragement.

        • nonanon says:

          Nah, that’s false.

          More than half of YC’s returns, ever, come from AirBNB and Dropbox, which have not exited yet. YC funds the most marginal of startups, and startups in general have a power law distribution of returns, so over a longer time period (YC intends to last for generations), there will probably be a single startup whose returns totally dominate anything else YC has ever done.

          Probably something involving energy, AI, or a new religion.

  28. I loved this comment: “You say ‘astrologer’, I say ‘social priming theorist ahead of his time’.”
    Comedy gold! The anti-priming article was hilarious too 😀

    • Richard Metzler says:

      I’m not sure if that was actually comedy. From what little I’ve read, occult practices could turn out to be a treasure trove of applied psychology, if someone were to dig through the layers of bullshit. (Or show why the layers of bullshit are needed for this stuff to work.)

  29. Having reviewed the island photos, I think I speak for all of us in drawing up the schematics for the obvious design for our SSC island complex[imgur].

    Would be cool if someone did have a good idea that everybody could crowdfund.

  30. You’ve probably heard by now that the psychology replication project found only about half of major recent psych studies replicated. If you want you can also see the project’s site and check out some data for yourself … Related (though note this is an old study): journals will reject ninety percent of the papers they have already published if they don’t realize they’ve already accepted them.

    Somebody please please please come up with a practical solution to academic publication bias, and the system generally, soon. 🙁

  31. If nutrient deficits were the main barrier to high IQ throughout human evolution, both this trend in IQ and the significant variation in human intelligence would actually make a lot more sense. It’s only now that we’re actually maxing out a significant part of the population.

    Of course, there’s no danger we’ll max out actual intelligence. If anything we’re taking a fairly good crack at the opposite.

  32. Zakharov says:

    The entire wiki article on the Trobriand Islands is great. From earlier: “When inter-group warfare was forbidden by colonial rulers, the islanders developed a unique, aggressive form of cricket.”

  33. Jeff Kaufman says:

    for a real look into the Chinese id, see what commenters think when somebody’s daughter wants to marry a Japanese person

    A comment on r/china:

    From what I hear from Chinese friends, most people don’t like tiexue.net. It’s like the Stormfront of China (a quite bizarre comparison, I must say).

    and:

    Tiexue is commonly regarded as a haven for uneducated ultra-nationalist bigots. Those people don’t represent the majority’s views.

    On the other hand, my grandmother grew up in China until leaving during the Japanese invasion, and despite becoming a Quaker later on she hated the Japanese all her life.

    • onyomi says:

      Yes, although I think it may be getting a little better in recent years, Chinese hatred of the Japanese is very real, even among people young enough not to have personally experienced any war crimes.

  34. Jeff Kaufman says:

    Schwab study looks at how five different strategies for market timing would have worked over the past twenty years.

    In their small print they say “had fees, expenses or taxes been considered, returns would have been substantially lower.” It would be nice for them to have addressed that! What does that do to the ratios?

    • Douglas Knight says:

      I think that the only effect on the ratios would have been to make the cash strategy worse, since it is subject to higher taxes, and because its higher taxes compound. The timing strategies are all the same number of transactions, one per year, so there is no variation in the amount of transaction fees. They are all buy-and-hold strategies, so they are subject to the same long-term capital gains rates. (Timing does affect the amount of dividends received, which are subject to different tax rates, but I think that this is a small effect.)

      • Douglas Knight says:

        Oh, except dollar-cost averaging means getting 12 fees instead of 1, so that will do a little worse.
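
        A rough sketch of how much that extra fee drag could add up to, compounded over the study’s 20-year window (plain Python; the $8.95 commission and 7% return are assumed round numbers for illustration, not figures from the study):

def fee_drag(fee_per_trade=8.95, trades_per_year=12, years=20, annual_return=0.07):
    # Future value of all commissions paid over the period, i.e. roughly
    # what the fees cost by the end once forgone growth is included.
    total = 0.0
    for y in range(years):
        total += fee_per_trade * trades_per_year * (1 + annual_return) ** (years - y)
    return total

print(round(fee_drag(trades_per_year=1)))    # one purchase per year:  ~393
print(round(fee_drag(trades_per_year=12)))   # monthly purchases:      ~4711

        So with those assumptions the extra eleven trades a year end up costing a few thousand dollars over the whole period – small next to the totals in the study, but not nothing.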

  35. Josiah says:

    Re: the study about IQ and genes/poverty

    Adjusted for inflation, $12,000 today would be about $1,500 fifty years ago. That’s about the poverty threshold for a single male (the overall poverty rate was 19% then). So it wouldn’t have been the case that anywhere near half the population fell into that category.
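
    A quick sanity check of that conversion, using assumed CPI-U annual averages of roughly 31.5 for 1965 and 237 for 2015 (my numbers, not from the study):

# rough inflation adjustment: 2015 dollars -> 1965 dollars
cpi_1965, cpi_2015 = 31.5, 237.0
print(round(12000 * cpi_1965 / cpi_2015))   # ~1595, i.e. about $1,500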

  36. Deiseach says:

    Okay, having sufficiently flogged the AI risk horse, I’ll move on to another link here.

    Online and mobile dating sites increase HIV prevalence when they enter an area. The quasi-experiment suggests they’re responsible for about a thousand extra HIV cases in Florida.

    Isn’t this rather a “grass is green” announcement? “Rates of STIs increase when more people are having more casual sex” is hardly strange, new or startling news; yes, it’d be nice if people practiced safer sex, but when they’re young, free and single and looking to mingle, plus (very likely) alcohol and recreational drugs are in the mix, it’s not surprising that they don’t stop to tick all the boxes on the checklist.

    • Saint_Fiasco says:

      The odd thing in my opinion is that having casual sex through an online service means all your casual sex is deliberate and premeditated. They should have plenty of time to tick all the boxes in the checklist.

      • jaimeastorga2000 says:

        To quote Methods of Rationality, “I think that you far underestimate the rarity of common sense.”

      • Deiseach says:

        It may be my advanced years or my curmudgeonly nature, but do you really think people on Grindr, Tinder, etc. are more concerned with finding a lasting partner than with getting their ashes hauled that night?

        “A standing prick hath no conscience”, as the saying goes. Besides even if people are being careful, having opportunities to access a greater number of casual partners and so have more promiscuous* sex must mean increasing your chances of picking up something (besides that fine thing in the restaurant).

        *Not used in a moral judgement sense; please let’s not get derailed into an argument about sex positivity or slut shaming.

  37. onyomi says:

    I’ve always thought the amount I learn was inversely proportional to the amount of time I spent in class, though there is also a non-zero amount of time in class which causes me to learn more than no class at all, basically because it keeps me motivated and focused on learning whatever the class is about, and gives me a chance to clear up questions which may have arisen in the process of basically teaching myself.

    I used to think I was weird because I both like teaching myself things and find being in class extremely exhausting. I now think I am less unusual, maybe even typical.

  38. Jordan D. says:

    Prosecutorial immunity is the broadest form of immunity of which I am aware- so long as they are not willfully violating laws (and even then…) their discretion is afforded practically total deference. There’s no doubt in my mind that merely ‘using a test which everyone in the field agrees isn’t the right test’ won’t get past an immunity defense, as a prosecutor could credibly claim that he’d just chosen to believe the wrong expert.

    On the other hand, the people in that thread are being a liiiittle pessimistic. While it’s true that Pottawattamie County v. McGhee got dismissed without any sort of sanction against the prosecutor, we don’t actually know how the court would have ruled; it was dismissed pursuant to Rule 46 of the Supreme Court, which means that the parties settled and withdrew. And it seemed to me like the Justices at oral argument weren’t exactly cheering the prosecutor at every turn.

    • brad says:

      Judicial is as bad or worse.

      Stump v. Sparkman, 435 U.S. 349 (1978) immunized a judge that ordered a teen sterilized even though he had absolutely no authority to do so.

      Mireles v. Waco (1991) 502 U.S. 9, immunized a judge who when issuing a bench warrant for an absent attorney, instructed the police to “rough him up a little”.

      • Jordan D. says:

        I grant that as true, although those particular cases are pretty extraordinary abuses.

        On the other hand, prosecutorial immunity is statistically more likely to be troubling- the vast majority of cases never move to trial, and the prosecutor has an oversized influence on how those few trials proceed. If we take even a few of the libertarian-ish claims (some made by no less a figure than Judge Kozinski) about procedural or evidentiary abuse by prosecutors as true (abuse which, we must remember, would at worst get that one case thrown out if discovered), it’s clear that they are the most dangerous point of potential failure in the system.

        If we look at judicial abuses, we mostly see situations in which they countenanced a prosecutorial abuse or some such; cases like Stump, or Band’s Refuse Removal Inc. in which the judge takes affirmative lawless action are rarer, and I suspect much more likely to be reversed on appeal.

        …but I have to cop to bias here. People very reasonably accuse my profession of having something of a clerical mythos in which judges are sacred and respected arbiters, and I do tend to believe that to some extent.

        (Also to clarify- I agree with your assessment of broadness entirely, I was just adding a new dimension of relevance)

    • gwern says:

      which means that the parties settled and withdrew

      So in other words, ‘the prosecutors’ employers paid for a settlement and the prosecutors never went to jail’. Quite a deterrent there.

      • Jordan D. says:

        Oh, I’m not arguing that the prosecutors were deterred in that case- I’m saying that I think the people in the thread were making a representation of the case as precedential when it wasn’t.

        There’s no doubt that Pottawattamie County was a horrifying case, and the $12 million paid mostly by insurance for 26 years of illegitimate imprisonment ($7 million to one, $5 million to the other) is woefully inadequate compensation. It’s just not evidence that the courts would hold prosecutors in a similarly egregious Brady violation totally immune.

        • brad says:

          His point though is well taken. Even if you are right and prosecutorial immunity isn’t as absolute as some might think, at best you get a situation like the qualified immunity for police officers. Which rarely or never impacts the actual police officer responsible, but instead takes money from their victims broadly construed (the taxpayers) to give it to their victims narrowly construed.

          In addition to the maybe broken civil response to prosecutorial misconduct, there’s also a totally broken criminal response, and a mostly broken professional oversight response. Even the political response doesn’t work particularly well given that line prosecutors are often protected by civil service rules from even their elected superiors.

  39. Daniel Armak says:

    > Once the yam houses are full, a man performs a special magic spell for the hamlet that wards off hunger by making people feel full.

    We should use this ritual as a diet aid!

    • Godzillarissa says:

      Do you trust the magic to not actually make you full with magic high-calorie muffins, though?
      I know it says “feel full”, but what does that Wiki-author know about magic, anyway?

  40. Adam says:

    “Please consider epigenetic inheritance studies guilty until proven innocent at this point.”

    I’m not sure I understand what this means. Is this for or against epigenetic explanations?

  41. ADifferentAnonymous says:

    The pipeline article is summarized a bit misleadingly. The article is specifically about post-bachelor’s education, whereas ‘nothing that happens after bachelor’s’ makes it sound like it includes industry.

    • Blue says:

      Additional ways it’s misleading:

      It’s not clear from the summary that this only applies to “students who have achieved a Bachelor’s”. There can be plenty of attrition in getting that Bachelor’s.

      More egregiously (to me), their definition of “in academia” is only “had a PhD”, not anything more advanced. I.e., attrition in the postdoc phase (which is huge) is not measured here.

      It’s good that grad school itself does not represent such a leak in the pipeline, but grad school applications are one of the areas where it is *easiest* to be aggressively egalitarian.

  42. Tangent says:

    I’m finding the zoning link interesting. I’d heard people from the US talk about zoning laws but never really understood what they were, as I don’t think we have them in the UK. It looks much more draconian than I expected, with whole areas where only one narrowly-defined type of building is allowed.

    Here there is plenty of regulation around land use (probably much stricter overall) but as far as I can tell it’s more down at the level of ‘use classes’ for individual buildings. Certainly the road I’ve been living on has houses, flats, a couple of small parades of shops, a car repair garage, a big higher education college, a church and a couple of pubs. It’s more variety than usual in a residential area but nothing really out of the ordinary.

    • Douglas Knight says:

      I think British zoning is more restrictive than American.

      The article is about something that Japan does better than America, but when Americans complain about “zoning,” they usually aren’t complaining about that thing.

      “Zoning” just means standardization of building regulations. Sometimes the zones have stupid regulations, like completely barring commerce in residential areas. The Japanese system doesn’t let every individual town make its own unique mistakes. But most of the time people are complaining about much more intentional regulations. People who like high density complain that they are in low density zones. Or they complain that existing high density zones still require parking. This is an intentional rule stemming from a different vision of the city.

      But the main thing Americans complain about when they say “zoning” is regulations that are not intended to be followed. In some towns, including most cities, almost no building follows the regulations, but instead gets special exemption. That means that the variance board is a bottleneck that approves every decision. Such regulations are quite intentional, for the purpose of giving power to the board. This is partly corruption and partly so that development can be guided in ways that the public probably wants, but doesn’t want spelled out. But it’s hard to tell what’s going on, because that’s the whole point.

      You could accomplish this under the Japanese system by just zoning an area less dense than you really want it, but in America it is usually accomplished by more subtle building regulations.

      • HeelBearCub says:

        @Douglas Knight:
        “In some towns, including most cities, almost no building follows the regulations, but instead gets special exemption.”

        Do you have some citation for this? Or is it more your gut feeling?

        “That means that the variance board is a bottleneck that approves every decision. Such regulations are quite intentional, for the purpose of giving power to the board.”

        It strikes me that loose rules lead to those who want to build frequently gaming the system. Tight rules require many more variance board approvals. I would say this is roughly equivalent to “type 1/type 2” errors and you have to pick your trade-off. There isn’t a need to ascribe nefarious motives.

        • Anthony says:

          As I just commented above, Douglas Knight’s claim that “In some towns, including most cities, almost no building follows the regulations, but instead gets special exemption.” is exactly how Trump works. He knows the right people to persuade or bully or bribe to get the variance needed to build his project.

          And yes, in lots of cities, especially on the coasts, *everything* needs a variance. I’m not entirely certain that the system is intentionally setup to encourage graft, but it certainly does produce quite a lot. Some part of the problem is that many places were zoned decades ago, and the maximum lot cover or minimum house sizes, or whatever, don’t make commercial sense anymore, and building a profitable project requires a variance.

          • HeelBearCub says:

            “And yes, in lots of cities, especially on the coasts, *everything* needs a variance.”

            Again, citation. I don’t think it is common knowledge. Do you know this to be true, or have you just heard it?

            I’m not saying it isn’t true, just that the claim is very sweeping. As an example, any rules that are in effect for Manhattan aren’t representative of pretty much anywhere other than NYC.

            So, how many cities, how many permits, in aggregate, of the total, require a variance each year? Do we have any sort of number for this?

        • BBA says:

          Let’s look at concrete examples. New York City has a 3800-page zoning resolution, not including the maps. (And those are full-sized pages of text, not that wide-margin large-print format that Congress uses.) There are over 100 different types of zones defined, and “overlays” in various parts of the city to allow or encourage otherwise disallowed development. It’s so complex you practically have to apply for a variance to figure out whether you need one. This is a lot of effort for “rules nobody was meant to follow”, even if a variance is more likely than not to be granted – it’s more of a starting point for a negotiation.

          Small-scale development (a single-family residence in a neighborhood zoned single-family residential) will probably comply with the resolution, with no need for a variance. Medium-scale development (an office building in Midtown) runs into the default floor area cap being artificially low. There are various ways to increase the cap – buy air rights from nearby low-rises, dedicate part of the property as a public plaza – or the developer can apply for a variance. Most likely, they’ll do both, using the bonuses to improve their bargaining position with the various agencies involved.

          Large-scale development (whole neighborhoods, like Atlantic Yards in Brooklyn) will often involve getting a state agency nominally involved in the construction to bypass the zoning process entirely.

          There are also lots of miscellaneous rules that make little sense. A “physical culture establishment”, such as a health club, requires a special permit from the variance board anywhere in the city. I read somewhere that this was originally meant to be an anti-prostitution measure – it also covers massage parlors.

      • BBA says:

        Britain doesn’t have “zoning” in that land use isn’t regulated by a system of zones. Instead (as I understand it) they just require that any change in land use be approved by the local planning board. This is more restrictive in theory than the American system but in practice it works out about the same.

        In America, it’s often claimed that Houston doesn’t have zoning, but it’s more accurate to say it doesn’t have zones. There are still citywide density restrictions in place so it won’t be turning into Manhattan anytime soon.

  43. Jaskologist says:

    I’m glad somebody is finally looking into disentangling the fact that religiosity is measured by church attendance and that extra-curriculars of any kind have similar benefits (and even gladder that the effect remains). I’ll also take this opportunity to renew my plea for Scott to deep-dive the data on religious benefits.

  44. Ever An Anon says:

    I am very skeptical about the idea that a medical education increases a Pakistani woman’s chance of getting a good husband. I can believe that they believe that, but from what I’ve seen from the Arab / Indian girls in medical and engineering schools it looks pretty much like the exact opposite.

    Being a decade older and having had sex with several American guys, plus one or two other girls in a lot of cases, cannot be good for finding a husband. Speaking as one of the American guys I appreciate the existence of this misconception but it’s probably not very helpful for the girls.

    • Deiseach says:

      But are those Arab/Indian girls in American universities, as distinct from universities in Pakistan? A bit of quick Googling got me this dating site ad:

      We are seeking an educated girl for our son. He is a professional engineer and currently working for a university in Karachi, Pakistan. He is 26, very religious and has a sound family background. We belong to a very respectable urdu speaking family from Karachi.

      We are seeking educated, modest and home maker girl. She should be religious and well versed with family values.

      A second one (edited by me) from the same site states:

      Sunni Muslim Hanafi parents (perform Fatiha & Durood) inviting Alliance for their Son,
      Age :- 28yrs
      Qualification :- Electrical Engineer
      Height :- 5’11”
      Occupation :- Electrical Engineer
      with Good Handsome Salary & Family Visa status.

      The Girl must at least be a graduate, from a religious and sophisticated family, from Hyderabad.

      And No. 3 is looking to get both himself and his sister hitched:

      I am a Sunni Muslim from Udupi Karnataka State India worked as Medical Engineer in Gulbarga dist. Karnataka State, I need religious Sunni family educated girl for marriage, a lady radiologist or pathologist is most well come other wise a graduated girl please call my father [I removed number]

      Also I wanted a good sunni muslim gentleman for marrying my graduated younger sister

      A girl with a medical degree ready to get married as soon as she graduates doesn’t sound like an impossible match to make! No. 3 makes a point of it that his sister is a graduate.

      • Ever An Anon says:

        International students. The ones with wealthier parents did undergrad here too; the rest had gone to schools in their own countries beforehand. Presumably American schools would be more prestigious for the purposes of marriage, the same as in the job market.

        Maybe they just don’t know what happens at university, or they “well come” a bit more exploration in their brides-to-be than one would expect from religious conservatives.

  45. “But a new study finds that religious activities are better for your mental health than other forms of social participation.”

    Anyone have a copy of this article (and accompanying web material, if any) that they’re willing to share? It’s paywalled, and $39 strikes me as excessive. I’d like to check out their statistical methodology and do my own analysis of their data, if they’ve made it available.

  46. onyomi says:

    I don’t know about hypnotism, but I do think Trump is a master of the art of projecting extreme self-confidence. Most people are just looking to follow someone who seems like they know what they’re doing. Trump strongly projects that he does, even if he seems to be ludicrously wrong on many object-level issues. I even saw some survey participants after a Trump speech specifically comment “he seems confident, which makes me feel confident.”

    Not coincidentally, I’m sure, projecting extreme self-confidence is probably a great quality to have when you’re looking for people to invest in your crazy real estate project/casino.

    • Anthony says:

      projecting extreme self-confidence is probably a great quality to have when you’re looking for people to invest in your crazy real estate project/casino.

      Also to get bureaucrats to give you permission to build something that’s nominally against the rules. Trump is a developer in New York, which means his skillset involves obtaining permission to do what doesn’t require asking in Houston or Akron. Though Trump also uses his ability to bully bureaucrats to steal other people’s land.

  47. rminnema says:

    Higher school starting age lowers the crime rate among young people. Four day school week improves academic performance. It would probably be irresponsible to sum this up as “basically the less school you have, the better everything goes,” but I bet it’s true.

    This is why my wife homeschools our kids four days a week. (I kid. A little.)

    • LTP says:

      Couldn’t this just be (largely) measuring class? Richer kids are less likely to be criminals and more likely to be academically successful. Richer parents are more likely to game the system to have their kids be older when they start school so they perform better on tests (and get into elite high schools and universities when they’re older), and richer parents are probably more likely to have their kids enrolled in alternative schooling that would only have a 4-day week, and more able to afford to take the time off to care for their kids (or pay for the care) on that now-free 5th day. So, yeah, the rich kids commit crimes less and are academically successful, but the educational differences are incidental.

      • Emile says:

        Richer parents are more likely to game the system to have their kids be older when they start school so they perform better on tests (and get into elite high schools and universities when they’re older)

        I would expect the opposite – that richer parents game the system to have their kids be younger when they start school.

        (Some quick googling didn’t find out whether they do or not)

        • C says:

          It could just be that the older kids are less likely to be bullied, and therefore socialize better?

        • Anthony says:

          I would expect the opposite – that richer parents game the system to have their kids be younger when they start school.

          As a rich person with kids, I have to say, it depends.

          California has pushed the cutoff date for Kindergarten entry to September 1 (it had been set by district, usually in December, then standardized statewide on December 1, then pushed back to September 1 effective last year). My younger daughter was born in mid-September. I wanted her to start regular Kindergarten, but we were told we couldn’t (and weren’t able to game the system otherwise). But she had shown she was a fairly bright kid and would not have had trouble keeping up academically.

          If my older daughter had been the one in that situation, I’d have been ok with keeping her back because while academically she’s way ahead, her social development is “below grade level”.

          For parents of sons, it makes more sense to have their sons start a little older. One common reason really is red-shirting – a kid who’s a year older will do better in school sports. However, there’s also some amount of self-confidence boost from being one of the older kids in your class, and that benefit is larger, and more beneficial, for boys.

          I desperately wanted to skip a grade through much of later elementary and middle school, for academic reasons. Looking back, I’m glad my parents didn’t do it – socially it would have been a disaster. My high school was a little more flexible and pushed a little harder academically, and I understood about my level of social skill so the desire to skip a grade faded then.

          • Saint_Fiasco says:

            Girls also grow faster than boys at that age, both in physical athletic performance and academics.

            When I was a kid, the smart boys were accused of being “girly”, because girls usually did better at school.

          • Cadie says:

            I skipped two grades. It was okay in elementary school, when it was only one skip anyway, but high school was a complete disaster. And this is even with physical development so far ahead of my age that I didn’t look any different from my peers and I didn’t have trouble with the classwork. It was my social development that was too far behind, especially since it started out behind and I eventually got left in the dust. My parents were so stoked about their daughter going to be a 16-year-old college student that they didn’t notice or care that my social development had stalled and emotional problems were exploding in my face.

            If I ever have kids (which is looking less and less likely, given that I’m 36 and still have very poor ability in romantic relationships) they are not starting school early or skipping grades. There are other ways to intellectually stimulate an intelligent child. Social success is harder to fix if it gets borked up, and it’s at least as important to wellbeing as academics are, so IMO it’s best to make sure they don’t have the cards stacked against them before they even start.

          • James Picone says:

            I started school at four-and-a-half (july birthday) and later skipped a grade.

            Given the opportunity, I wouldn’t change that. I don’t think being older would have reduced the bullying I got or made me more socially adept, it would’ve just prolonged the torture.

      • Douglas Knight says:

        No, the higher school starting age study was not about red-shirting, but about birthdays.

        No, the 4 day week study is not a study of alternative schooling, but of public schools. There could be class confounds about which schools did this, but it was generally more rural schools that shortened the week. And it was probably the poorer among them. Also, the study was a difference-in-differences design that tried to isolate the effect of the change. (Though I think difference-in-differences studies are generally noise.)

  48. Edward Scizorhands says:

    > Common Knowledge and Aumann’s Agreement Theorem

    I’ve thought about this for years and I still don’t get it. I understand at the immediate level it’s true, and I can repeat that it’s true so I sound smart. But I can’t internalize it or generalize it.

    If you have two muddy children and ask them to stand up if their own head is muddy, they never will, no matter how many times you repeat it. Until you announce one of them has a muddy head, after which they both stand up on the second iteration.

    But why does saying “there is at least one muddy-headed kid” change anything? (Walking through the example doesn’t show me why. It just shows that it is.)

    • Ever An Anon says:

      I’m not a math guy but it seems straightforward.

      If at least one of you has a muddy head, it’s either both of you or just the one. So if the other kid had a clean head you would know your head must be muddy and stand up. Alternately his head is muddy and you have no new information… until he either stands or sits, at which point you know whether your head is clean or not.

      • Edward Scizorhands says:

        Right, but why does “there is at least one muddy-headed person around” change the game?

        If there are three of us sitting around, and all three of us have muddy heads, we all know that we all know that we all know that (etc to infinity) there is at least one muddy head, right?

        • InferentialDistance says:

          If there are three of us sitting around, and all three of us have muddy heads, we all know that we all know that we all know that (etc to infinity) there is at least one muddy head, right?

          No: person A doesn’t know that person B knows that person C knows that there is at least one muddy-headed person. Since person A doesn’t know their own muddiness, they don’t know whether person B sees one or two muddy-headed people; therefore, from A’s perspective, person B may only see person C as muddy, and therefore may think that person C sees no muddy people. A’s perspective (being able to see that B is muddy, and that C can see B) does not transfer to B (who cannot see themself), and therefore A does not assume that B knows that C knows that there is a muddy-headed person.

          Therefore person A does not know if person B knows that person C knows that there is at least one muddy person, until the statement is uttered.

          For all N, there is an N-long chain where knowledge-of-knowledge fizzles out, unless the statement is uttered. Only after the statement is there infinitely reflexive knowledge.

        • rsaarelm says:

          I got the idea that the inductive chain needs everyone to have their clocks synchronized, so to speak. Start with the simple case of two children with muddy foreheads. Before the utterance, both are just looking at each other and thinking “he doesn’t know, poor guy”. After the utterance, both will go, “okay, game is up, he knows it now”. Then the induction clock bell tolls, by which time everyone is obliged to act if they have a muddy forehead. Both kids now notice that the other one hasn’t done anything yet, and then both go throw themselves into the live volcano as the rules of the scenario demand. They need common knowledge of the starting point for the chain of induction.

          So what about many kids who all have dirty foreheads. Everyone sees two muddy foreheads, so everyone already knows everyone knows there is a dirty head. With two kids, there was the possibility that one of them had a clean forehead, so the dirty one would see only clean foreheads and would promptly go throw himself into the volcano. With several kids, it seems like they need to think “Yay! Inductive reasoning suicide pact time!” instead of “No duh” when they hear someone call out the knowledge and do the clock synchronization. But supposing they really want an inductive reasoning suicide pact party, the clock-synchronizing outsider callout is the only point where they can start having one.

    • InferentialDistance says:

      But why does saying “there is at least one muddy-headed kid” change anything?

      Because neither child knows what the other child knows. Because they don’t know that they’re muddy, and therefore don’t know what the other child sees. Once the statement is uttered, they both know that the other knows that there is a muddy-headed child, and can use their reaction to figure out whether or not they’re muddy-headed.

    • FullMeta_Rationalist says:

      At the beginning of the game, Edward and Francine have no knowledge. In Edward’s and Francine’s minds, the situation could be in one of four distinct possible states. In state A, Edward and Francine are both muddy. In state B, Edward is muddy while Francine is immaculate. In state C, Edward is immaculate while Francine is muddy. In state D, Edward and Francine are both immaculate.

      The solution is basically a process of elimination. We must keep track of each person’s model of reality, but also their model of each other’s model. Notice that reality must underlie all models. This might seem obvious, but it’s important to note that all models must have at least one state in common, which corresponds to the underlying reality.

      ........................iteration 0.................................
      ........Edward's model of:......|...............Francine's model of:
      Ed's model;.....Fran's model....|.......Ed's model;.....Fran's model
      ................................|...................................
      ...Fm Fi...........Fm Fi........|..........Fm Fi...........Fm Fi....
      Em [A][B].......Em [A][B].......|.......Em [A][B].......Em [A][B]...
      Ei [C][D].......Ei [C][D].......|.......Ei [C][D].......Ei [C][D]...

      The game starts. Edward and Francine look at each other. They see the other person is muddy, and update their OWN models accordingly (by culling the states that are no longer viable).

      ........................iteration 1.................................
      ........Edward's model of:......|...............Francine's model of:
      Ed's model;.....Fran's model....|.......Ed's model;.....Fran's model
      ................................|...................................
      ...Fm Fi...........Fm Fi........|..........Fm Fi...........Fm Fi....
      Em [A][_].......Em [A][B].......|.......Em [A][B].......Em [A][B]...
      Ei [C][_].......Ei [C][D].......|.......Ei [C][D].......Ei [_][_]...

      At this point, neither of them stand up. The fact that the other person is still sitting has the potential to transmit information, but not at this point. So nothing changes.

      ........................iteration 2.................................
      ........Edward's model of:......|...............Francine's model of:
      Ed's model;.....Fran's model....|.......Ed's model;.....Fran's model
      ................................|...................................
      ...Fm Fi...........Fm Fi........|..........Fm Fi...........Fm Fi....
      Em [A][_].......Em [A][B].......|.......Em [A][B].......Em [A][B]...
      Ei [C][_].......Ei [C][D].......|.......Ei [C][D].......Ei [_][_]...

      When a bystander says “hey, at least one of you is muddy”, both Edward and Francine reason that they can’t both be immaculate. So in their own model, state D is no longer viable. They already knew that personally, so the knowledge doesn’t change their own model directly. But it does change how each person models the other person’s model. I.e. Edward knows Francine isn’t stupid, so he realizes that Francine will certainly cross off state D regardless of what her mental model was before the bystander spoke (and vice versa). This means the impossibility of state D has become common knowledge. This is important in the next iteration.

      ........................iteration 3.................................
      ........Edward's model of:......|...............Francine's model of:
      Ed's model;.....Fran's model....|.......Ed's model;.....Fran's model
      ................................|...................................
      ...Fm Fi...........Fm Fi........|..........Fm Fi...........Fm Fi....
      Em [A][_].......Em [A][B].......|.......Em [A][B].......Em [A][B]...
      Ei [C][_].......Ei [C][_].......|.......Ei [C][_].......Ei [_][_]...

      Next iteration: they look at each other again. Suddenly, the fact that both are sitting takes on new meaning. It allows us to start reasoning counterfactually. Let’s walk through Edward’s perspective.

      From Edward’s perspective, only two states are viable: state A and state C. Edward assumes state A is true. If Edward were muddy, then Francine still wouldn’t know whether to stand or sit. Ho hum.

      ........................iteration 3.1?
      ........Edward's model of:......|
      Ed's model;.....Fran's model....|
      ................................|
      ...Fm Fi...........Fm Fi........|
      Em [A][_].......Em [A][B].......|
      Ei [_][_].......Ei [C][_].......|

      Next, Edward assumes state C is true. If state C were true, then Edward would be immaculate. Edward also realizes (from his mental model of Francine’s model) that if he were immaculate, then Francine would also realize that state C were definitely true (because remember, it’s common knowledge that D is impossible; the Ei row only permits state C and not D). And if Francine realized C were definitely true, then she’d stand up.

      ........................iteration 3.2?
      ........Edward's model of:......|
      Ed's model;.....Fran's model....|
      ................................|
      ...Fm Fi...........Fm Fi........|
      Em [_][_].......Em [A][B].......|
      Ei [C][_].......Ei [C][_].......|

      But in reality, Edward did NOT see Francine stand up. Edward then concludes that state C is IMPOSSIBLE. So by indirect proof, Edward has proven that state A is the only configuration which makes sense. The situation is symmetric, so Francine meanwhile performs her own indirect proof (except state B replaces state C). Simultaneously, they both stand up.

      What’s important to recognize is that the indirect proof only becomes possible when the impossibility of state D becomes common knowledge. Without it, Francine’s sitting position doesn’t convey any new information within the context of Edward’s indirect proof. The removal of state D (even though it seems redundant at first) is what separates iteration 2 (a dead end) from iteration 3 (not a dead end).

      If you try to generalize this to Edward, Francine, and Greg – we now have to deal with 3 mental models per person. And no, the new mental models aren’t just 3×3 Sudoku puzzles, they’re 2x2x2 Karnaugh maps. There’s too many possibilities to use a process of elimination without adding additional constraints.

      (9 may be larger than 8, but 2^n grows towards infinity way faster than n^2.)

      —————————————————————

      Actually, I disagree with InferentialDistance. 3 people, all muddy-headed, should be able to infer common knowledge of at least 1 muddy head on first sight. E.g. Edward sees Francine and Greg are both muddy. Edward now knows that at least 1 person is muddy. Edward then infers that Francine sees that Greg is muddy. Edward now knows that Francine knows that at least 1 person is muddy, regardless of whether Edward himself is muddy. Since the situation is symmetric, that “at least one person is muddy” generalizes to common knowledge. QED.

      The problem isn’t the lack of common knowledge. The problem is that the particular piece of knowledge (which happens to be in common) is no longer enough to let us reduce the (now larger) state space down to a single possibility.
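
      For anyone who wants to poke at this mechanically, here’s a minimal possible-worlds sketch of the elimination process in plain Python (the function and variable names are just mine, for illustration). With the announcement, two muddy children stand at round 2 and three stand at round 3; without it, nobody ever stands:

from itertools import product

def muddy_children(actual, announce=True, max_rounds=20):
    # actual: tuple of 0/1 flags, 1 = muddy.  Simulates the puzzle by
    # eliminating possible worlds; returns {round: children who stand},
    # or {} if nobody ever stands.
    n = len(actual)
    worlds = set(product((0, 1), repeat=n))        # every candidate world
    if announce:
        worlds.discard((0,) * n)                   # "at least one is muddy"

    def would_stand(i, w, ws):
        # In world w (given public world-set ws), child i stands iff every
        # world i considers possible -- same as w except possibly at
        # position i -- has i muddy.
        possible = [v for v in ws
                    if all(v[j] == w[j] for j in range(n) if j != i)]
        return bool(possible) and all(v[i] == 1 for v in possible)

    for r in range(1, max_rounds + 1):
        standers = {i for i in range(n) if would_stand(i, actual, worlds)}
        if standers:
            return {r: standers}
        # Everyone saw that nobody stood, so worlds in which somebody
        # would have stood are publicly eliminated.
        worlds = {w for w in worlds
                  if not any(would_stand(i, w, worlds) for i in range(n))}
    return {}

print(muddy_children((1, 1)))                   # {2: {0, 1}}
print(muddy_children((1, 1, 1)))                # {3: {0, 1, 2}}
print(muddy_children((1, 1), announce=False))   # {}

      The announcement matters only because it removes the all-immaculate world from everyone’s shared set of possibilities; that single removal is what lets each round of silence start carrying information.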

      • FrogOfWar says:

        In its technical sense, “common knowledge” requires unlimited recursions (A knows that B knows that…). InferentialDistance already showed why that doesn’t hold.

        • FullMeta_Rationalist says:

          Hm… I stand corrected. My mistake was generalizing to n=1,000,000. Such a scenario should easily generate common knowledge that “everyone is immaculate” is false. Upon further reflection, it seems that n=3 is not sufficient to generate common knowledge. But n>3 is sufficient.

          Construct a triangle from Edward’s POV. The common knowledge breaks down between Francine and Greg because they need a muddy person that Francine and Greg can both see to serve as a reference point. Ed is the only person that Francine and Greg can both see, but he has no self knowledge. So n={2, 3} fails.

          When n=3, Edward does not have a solid reference point for him to anchor common knowledge between Francine and Greg. But a 4th muddy wheel named Hannah could provide that anchor point between Francine and Greg from Edward’s POV. And Hannah herself would have common knowledge of the falsehood of “everyone is immaculate” because the muddiness of Fran (or Greg) is apparent to all but Fran.

          Given my earlier mistake, I’m not confident that this new hypothesis is correct either. But I think n>3 circumvents the infinite-recursion’s information-decay by providing overlapping circles of common knowledge centered around 2 fungible loci of awareness.

          I.e. if Edward the protagonist is on the outside of the circle, and Francine and Greg are the 2 loci, then everyone on the outside of the circle has common knowledge that the muddiness of either loci disproves the “all immaculate” state. But Edward needs at least one more muddy person on the outside of the circle (Hannah) to anchor his suspicions of common knowledge between the two loci (and the rest of the circle (except Hannah)).

          It seems weird that 4 is the magic number at which common knowledge that “all are immaculate” is false starts to bootstrap itself. I think the reasoning is that common knowledge is a game for two. The referent of the common knowledge can be reflected inward or to a third party, but the no-self-knowledge clause eliminates the possibilities for both “reflection inward” and “projection onto a third wheel” to be sufficient to generate common knowledge for all parties.

          • FrogOfWar says:

            It’s like a mathematical induction. Each time you add a new person with a muddy head, you just add another layer of knowledge that you have to go through to get the result.

            For 4 People, all with muddy heads:

            A doesn’t know if their head is muddy. So A doesn’t know that there are at least 4 muddy heads.

            If A’s head isn’t muddy, then it is an open possibility for everyone else that there are 2 unmuddy heads. So it is open to A that it is open to B that there are two unmuddy heads. A doesn’t know that B knows there are at least three muddy heads.

            But if that’s open to B, then it will also be open to B that it is open to C that there are three unmuddy heads, since C just adds their ignorance of their own head to the possibility of A and B having unmuddy heads. So, A doesn’t know that B knows that C knows that there are at least 2 muddy heads.

            Do the same process one more time and you get that A doesn’t know that B knows that C knows that D knows that there is at least one muddy head.

          • FrogOfWar says:

            Actually, let’s make it closer to a mathematical induction.

            Base Case

            When n=1, there is not common knowledge that at least one head is muddy.

            Induction Step

            Suppose that there is not common knowledge for n-1 people with muddy heads.

            Consider the case of n people with muddy heads. It’s an open possibility for the nth person that the (n-1)th person only sees n-2 muddy heads (because the nth person doesn’t know their own head is muddy). But, by the assumption of our induction step, in that case the (n-1)th person does not know that the (n-2)nd person knows that. . .the 1st person knows at least one person has a muddy head.

            Therefore, the nth person does not know that. . .the first person knows at least one person has a muddy head.

            Therefore, for all n, it is not common knowledge among n people with muddy heads that at least one person has a muddy head.
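
            If you’d rather check that induction by brute force, here’s a small sketch (plain Python, names mine) that walks the “A knows that B knows that…” chain over possible worlds. Without the announcement the n-deep chain fails for every n; once the all-clean world is removed it holds at any depth:

from itertools import product

def knows_chain(world, agents, worlds, prop):
    # Does agents[0] know that agents[1] knows that ... prop holds, at `world`?
    # Agent i considers possible every world differing from the current
    # one only at position i.
    if not agents:
        return prop(world)
    i, rest = agents[0], agents[1:]
    accessible = [w for w in worlds
                  if all(w[j] == world[j] for j in range(len(world)) if j != i)]
    return all(knows_chain(w, rest, worlds, prop) for w in accessible)

n = 4
actual = (1,) * n                                  # everyone is muddy
all_worlds = list(product((0, 1), repeat=n))
at_least_one = lambda w: any(w)                    # "at least one muddy head"

# Without the announcement, the n-deep chain through distinct agents fails...
print(knows_chain(actual, list(range(n)), all_worlds, at_least_one))      # False
# ...though every shorter chain still holds.
print(knows_chain(actual, list(range(n - 1)), all_worlds, at_least_one))  # True
# Removing the all-clean world (the public announcement) makes it hold at
# any depth, i.e. the fact becomes common knowledge.
announced = [w for w in all_worlds if any(w)]
print(knows_chain(actual, list(range(n)), announced, at_least_one))       # True

            Which is just the induction above in executable form: each additional muddy child adds one more layer of “knows that” you can peel off before hitting the all-clean possibility.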

  49. Jiro says:

    Each child can observe the reaction of the other child to the statement. This provides the child with additional information by the second iteration.

    You seem to be thinking “each child sees a muddy head anyway, so telling them ‘one is muddy’ provides no additional information”. The additional information is not information about what they see, but about what they can infer from the lack of reaction.

  50. ryan says:

    Further suggestion that genes have more effect on IQ in the rich than the poor. A koan: this study found shared environment only affects IQ for people below the tenth percentile in economic status. The tenth percentile of income is below $12,000. But fifty years ago, probably most people were below $12,000, and fifty years from now, maybe nobody will be below $12,000. Do you think this same study done in the past or future would repeat the finding of a less-than-$12,000 threshold, repeat the finding of a less-than-10% threshold, or something else? Why?

    Don’t focus on the particular amount of money. The real issue is that extremely bad health from in utero to age 3 or 4 can permanently harm a child in many ways. One of them is diminished adult IQ relative to what proper health and nutrition would have led to. The bottom tenth percentile in economic status is where you find this sort of bad parenting.

    • Vaniver says:

      The bottom tenth percentile in economic status is where you find this sort of bad parenting.

      Yes, but think a little more about the question. Are 10% of parents today bad parents (in the sense of their effect on IQ)? What was that number decades ago, and what will that number be decades from now? Is it driven primarily by economic or cultural or other factors?

      • ryan says:

        I highly suspect they decided the tenth percentile was an important percentile only after they had collected and analyzed the data. It just so happens to be where the signal they were looking for started to show up.

        I think the ship has sailed on repeating the analysis on data from decades ago. On into the future, who knows: maybe the effect disappears as we move into the universal basic income utopia, or is amplified as we move into the universal basic income hellish nightmare.

        I suspect a mix of economic and cultural factors. “The kid didn’t eat enough food” or “they were sick all the time” are easy explanations that I think qualify as economic. I wouldn’t mind an experiment with a campaign of “it’s ok to spank your kids from time to time, but never ever hit them in the head.” Obviously I suspect brain damage as an explanation for observing what seem like damaged brains.

  51. Daniel Speyer says:

    >MtG is probably best suited to attractive people, those with good social skills, those who fit in well in high-status and wealthy circles, and women

    Where’s the best place to quote this out of context? Somewhere where people will have heard of 100k hours but will first read MtG as Magic: The Gathering.

  52. iarwain1 says:

    On the subject of prosociality / wellbeing and religion, a recent article challenges the conventional wisdom by claiming that, depending on the particular situation, atheism might be just as good or even better for prosociality / wellbeing than religion is:

    http://smithandfranklin.com/current-issues/Atheism-Wellbeing-and-the-Wager-Why-Not-Believing-in-God-With-Others-is-Good-for-You/9/16/120

  53. Deiseach says:

    I am becoming increasingly fascinated by the question of who, oh who, can the Anon who regularly sends me Tumblr asks be, since just now I have got one asking me:

    The scariest thing about Eliezer Yudkowsky is how such an intelligent man, who spent so much time studying cognitive biases and talking about corrupted hardware, in the end used his intelligence to rationalize leading a cult whose members gave him money and let him sleep with their wives and girlfriends, just like every other cult leader in history. Do you agree?

    You will all, doubtless, be edified to learn that I (wo)manfully resisted snapping up the bait to “Let’s you and him fight” 🙂

    Oh bashful little Anon, come out, come out, wherever you are – I don’t bite! (I tend to bolt my food whole).

  54. C says:

    “if I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.”

    How would you even spend $100 billion effectively on AI R&D? Assuming grants essentially pay salaries at an arbitrary $100k/year, that’s hiring a million AI researchers for a year, or 10,000 researchers for 100 years. Either way, it seems like a lot more money than can be spent.
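
    The back-of-the-envelope arithmetic, with the $100k/year figure treated purely as an assumption you can swap out (a fully loaded cost with overhead and equipment would be several times higher and shrink the numbers accordingly):

```python
# Rough researcher-years that a $100B program could fund.
# cost_per_researcher_year is an arbitrary assumption, not a real figure.
budget = 100e9
cost_per_researcher_year = 100e3

researcher_years = budget / cost_per_researcher_year
print(f"{researcher_years:,.0f} researcher-years")                  # 1,000,000
print(f"{researcher_years / 100:,.0f} researchers for a century")   # 10,000
```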

    Schwab study looks at how five different strategies for market timing would have worked over the past twenty years.

    “But the study’s most stunning findings concern Ashley, who came in second with $81,650—only $5,354 less than Peter Perfect. This relatively small difference is especially surprising considering that Ashley had simply put her money to work as soon as she received it each year—without any pretense of market timing.

    If you’re tempted to try to wait for the best time to invest in the stock market, our study suggests that the benefits of doing this aren’t all that impressive—even for perfect timers. Remember, over 20 years, Peter Perfect amassed around $5,000 more than the investor who put her cash to work right away.”

    They really handicap their Perfect Timing investor by giving her only 12 different prices throughout the year. In reality, perfect market knowledge could also be used to buy derivatives that amplify the gains, so it is disingenuous to claim that Lump Sum investing would land anywhere close to Perfect knowledge.
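
    For anyone who wants to see the shape of the comparison, here is a toy simulation of the within-year-timing version of the question. The contribution amount and return parameters are made-up assumptions and the prices are synthetic, so this is not the Schwab methodology or data; it just shows that when the timer’s only edge is picking the best of 12 monthly prices each year, the gap over investing immediately tends to stay modest relative to the totals, whereas leverage or finer-grained prices would widen it.

```python
import random

random.seed(0)

YEARS, MONTHS = 20, 12
ANNUAL_CASH = 2_000                     # assumed yearly contribution, not Schwab's figure
MEAN_MONTHLY, SD_MONTHLY = 0.006, 0.04  # assumed, roughly equity-like monthly returns

# One synthetic path of monthly prices.
prices = [100.0]
for _ in range(YEARS * MONTHS):
    prices.append(prices[-1] * (1 + random.gauss(MEAN_MONTHLY, SD_MONTHLY)))
final_price = prices[-1]

def final_value(month_chooser):
    """End-of-horizon value if each year's cash is invested in the chosen month."""
    shares = 0.0
    for year in range(YEARS):
        idx = year * MONTHS + month_chooser(year)
        shares += ANNUAL_CASH / prices[idx]
    return shares * final_price

# "Perfect" timer with the 12-price handicap: best of that year's monthly prices.
perfect = final_value(lambda y: min(range(MONTHS), key=lambda m: prices[y * MONTHS + m]))
# Invest-immediately: first month of each year.
immediate = final_value(lambda y: 0)

print(f"perfect within-year timing: {perfect:,.0f}")
print(f"invest immediately:         {immediate:,.0f}")
```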

    • gbdub says:

      I think you’re misunderstanding the intended audience of the Schwab study. It’s aimed at the sort of person who has a chunk of money they’d like to get invested in a buy and hold strategy, but isn’t sure when/if they should jump into the market. Which is probably most people who invest. It’s not aimed at high-frequency day traders or sophisticated market managers – obviously someone with truly perfect knowledge could do a lot better.

      Really the premise is “if you’re going to invest in lump sums and hold your shares, how much does market timing matter” and in that sense I think it does a respectable job.

  55. Nuno says:

    The discussion between Vox Day and Luke Muehlhauser was left unfinished just when it seemed they were getting at something. Slightly disappointing.

    Also, does anybody know whether Luke Muehlhauser’s map of the Kalam Cosmological Argument (http://commonsenseatheism.com/?p=1212) has been abandoned incomplete?
    If it hasn’t, where is the continuation?

  56. Randy M says:

    “Argentina sort of has open borders already. Why aren’t people raising money to send Africans to Argentina? Or are we worried that if too many people take advantage of the opportunity Argentina will change its mind?”

    I take it you’re that guy in those High School “coming of age” movies who finds out his friend is throwing a small house party when his parents are away and photocopies the flier 1000 times and spreads it around 3 towns so that the house gets trashed and the cops come over and he gets grounded until he learns a lesson about picking friends better.

  57. I expect peace (freedom from fear of concussions) and quiet (uninterrupted sleep) to matter as much as income (food quality, though income also indirectly buys peace and quiet). So, assuming the bottom-10% finding today is real (not p-hacked, and not a failure to control for “dummy genes” that themselves push people into the bottom 10% of income), I think the threshold is absolute ($12k in today’s dollars) rather than relative, except insofar as the social order improves or deteriorates near the proposed threshold.

  58. The fact that John Beale was a fraud isn’t evidence that work on climate is fraudulent but the fact that he got away with his fraud for as long as he did is evidence that the EPA is incompetent, which might be relevant to how much weight you give to their pronouncements on climate (or other) issues.

  59. JayMan says:

    >A study comparing the association between twins (really interesting design!)

    Twin-control studies are great. Indeed, they should be the norm. As behavioral geneticist David Lykken said, “Almost any kind of research that you would think of doing with human subjects, psychological or medical, would be more interesting if you were doing it with twins.”

    >finds that genetics seems to determine the degree to which fast food makes you obese.

    Yes. See:

    Obesity Facts – The Unz Review: Laboratory/surgical interventions

    >That is, people with certain genes won’t gain weight from fast food, but people with other genes will. Still trying to decide what to think about this.

    Don’t get too excited:

    1. BMI was measured by self-report. This is less reliable than clinical measurements (the study claims high reliability, but when you’re doing twin control studies, you need highly accurate measurements).

    2. Diet and exercise measures were entirely self-report, which is known to be highly unreliable. Indeed, there was zero heritability for fast food consumption, which contradicts many other studies on diet. That, and the finding of a shared environment effect in a sample from a localized area (the Seattle region), suggest that these results are complete garbage – with a high amount of measurement error.

    That a noisy measure didn’t have an effect within MZ twin pairs is then not surprising. That said, it’s consistent with the above overfeeding study.

    Though even if they did find something, it’s important to note that twin control studies can only be used to demonstrate negative results, not positive ones (that’s because MZ twins aren’t identical).

    • Ahilan Nagendram says:

      >MZ twins aren’t identical

      Really? Most other behavior geneticists don’t seem to claim that; have a source? And I agree that most twin studies that find shared or even non-shared environment values are pretty badly done, with few controls for noise and no accounting for the fact that these values are either mostly noise or genetic in themselves. You can say environment is whatever you want it to be, but the established fact is clear: it’s a product of GxG interactions and expression.

      And this is true globally, not just within restricted environments. Studies of random twins from around the world, truly raised apart, show this.

  60. JayMan says:

    >High levels of national development make countries more likely to be democracies, but democracy does not seem to cause higher levels of national development.

    Umm hmm:

    National Prosperity – The Unz Review

  61. Whatever Happened to Anonymous says:

    >Argentina sort of has open borders already. Why aren’t people raising money to send Africans to Argentina?

    When did you learn that Argentina has open borders?

    More seriously, though, if that ever becomes a prospect, Argentina will change their mind quicker than DeAndre Jordan on going to the Mavs.

    • BD Sixsmith says:

      When I think of immigration to Argentina I think of Nazis.

    • nydwracu says:

      Argentina is whiter than the USA — estimates vary from 83% (Wikipedia) to 97% (CIA). The 97% figure does not include mestizos.

      Most immigration to Argentina is from either neighboring countries or Europe, with some immigration from East Asia. The neighboring countries are culturally similar and, as far as I know, aren’t interested in hostile takeovers of large swathes of territory from the hated oppressor, which is a pretty important difference. Speaking of which, you’ve heard of Jorge Ramos, right?

      • Whatever Happened To Anonymous says:

        >Argentina is whiter than the USA — estimates vary from 83% (Wikipedia) to 97% (CIA). The 97% figure does not include mestizos.

        Take into account that, in Argentina, there’s no status to be gained by being “non-white” in any social circle or class (quite the opposite, even), so anyone who could reasonably call themselves white does so. Do those stats come from self-reporting? What is the number for the US, and how does it deal with “Hispanics”?

    • Argentonymous says:

      Argentina has open borders as a consequence of being very hesitant to ever exert any authority. For politicians, opening the borders was practically easier than controlling them.

      Since the country is geographically isolated and in a low-population neighborhood, the consequences of immigration have been bad but not so catastrophic as to give the anti-immigration movement enough momentum.

      Regarding Africans, Senegalese immigrants have been coming illegally and staying under ‘refugee’ status. Now that there’s a sizable community of illegal street vendors, critics are starting to complain about it.

      • Whatever Happened To Anonymous says:

        >Argentina has open borders as consequence of being very hesitant of ever exerting any authority.

        That is a very cynical interpretation; freedom to enter and live in the country is, like, right there in the constitution.

        >Being geographically isolated and in a low-population neighborhood, the consequences of immigration have been bad but not so catastrophic as to give the anti immigration movement enough momentum.

        While it’s true that it doesn’t have a ton of political traction, there’s a very widespread resentment towards immigrants within the middle and upper-middle classes. I’d say that if the drug trafficking stuff becomes more of a thing in the coming years, it might become a focus point for anti-immigration to rally around.

        >Regarding Africans, Senegalese immigrants have been coming illegally and stayed under ‘refugee’ status. Now that there’s a sizable community of illegal street vendors critics are starting to complain about it.

        Sizable is being generous. Any amount of refugees even near the numbers Germany is bringing in would get everyone up in arms.

        • Argentonymous says:

          >That is a very cynical interpretation, freedom to enter and live in the country is, like, right there in the constitution.

          While the preamble of the constitution (which has no legal value) hints at freedom of movement as something universal, Art. 17 establishes it as European-only.

          >within the middle and upper-middle classes

          That’s debatable. Middle and upper class urban people form the core of the multicultural progressive scene, which promotes immigration of these people. Certainly the lower working classes openly express their dislike of Latin American immigrants (it’s the subject of lots of football stadium songs), since they are the ones sharing the urban space and public transport with them.

          • Whatever Happened To Anonymous says:

            >Middle and upper class urban people form the core of the multicultural progressive scene which promote immigration of these people.

            Maybe I’m a bit out of touch with the local progressive scene, but it doesn’t seem to me like something that’s being actually promoted. Of course, that could be because we’re already in their optimal scenario. Even then, I feel like we’re far too assimilationist for multiculturalism to gain a strong foothold.

  62. yli says:

    Wow I remember that Luke/Vox debate. Followed it in real time!

    I forgot how utterly mind-numbing Vox was at every turn. Unapologetically so. Ironically while performing apologetics.

  63. 27chaos says:

    Next time you run a Discourse and Dragons Game: https://en.wikipedia.org/wiki/Witch_of_Agnesi

  64. Multiple people in the comment thread on the Scott Adams piece have offered to bet him, and he hasn’t taken them up on it. You may need to get in line.

  65. Adam says:

    But seriously, on the marrying to give: forget about marrying. Generalize to influencing people with money to give, as a higher-payoff strategy than earning the money yourself. Whether by writing, public speaking, marriage, friendship, even theft, this seems like it has to be the strategy more likely to succeed for most people.

    • Deiseach says:

      Marry rich person. Make sure they change their will in your favour. Knock off rich person. Give 95% of proceeds to effective charity of your choice. Rinse and repeat.

      Marry, Bereave, Re-Marry to Give 🙂

  66. Airgap says:

    How much of Sam Altman’s money has Sam Altman put into AI safety R&D initiatives, and who wants to bet me that the amount is consistent with him believing $1 million is too much?

    • Whatever Happened To Anonymous says:

      While you make a good point, it’s fair to note that Altman’s suggestion also involves spending other people’s money.

      • Airgap says:

        What if his goal is not to move the federal budget, but to project a certain image which is valuable for him to project?

        • Whatever Happened To Anonymous says:

          I agree that is probably part of the reason, but I’d suggest against going full Hanson when all we have to go from is an interview in which he was explicitly asked the question.

          • Airgap says:

            If you think part of the reason is a precise, technical understanding of AI risk issues, I have some startup equity to sell you.

          • Whatever Happened To Anonymous says:

            >If you think part of the reason is a precise, technical understanding of AI risk issues

            Whatever happened to “Actual, unspecifically evidence backed, belief that AI risk is an Important Issue that we should pay significant attention to”?

            So it would be something like:

            “I believe AI risk is a thing and the government should get involved (for whatever reason); I will use ludicrously overinflated figures to signal confidence in my position and maybe set up an anchor”.

  67. Protest Manager says:

    >Nonconservative whites show a preference for black politicians over otherwise-identical white politicians in matched-“resume” studies

    It’s an interesting claim, and an interesting study. But why waste time on a study when you can check out the real world?

    What % of Democrats in Congress are black and elected in non-“majority minority” districts (including Senators)? Any?

    How about for Republicans? Mia Love and Tim Scott come to mind for Republicans. Any Democrats, at all?

  68. Albatross says:

    Late to the party, I blame work.

    On Argentina, I think the relevant stat is that the US grew enough grain to feed the entire world in the mid-1800s, but sadly logistical concerns like shipping costs and refrigeration continue to cause starvation.

    I just read an article on NextDraft about a guy who sank $8 million into saving thousands of refugees; he hauls them to the nearest port. Ideally, we’d send ships and bring 100,000 to Detroit. But… money. This is why nonprofits say to send $. We could re-settle them on a moon base if given enough cash.

    Membership in the UN should be contingent on accepting 1/(UN members) of the world’s refugees. Divided over the US, EU, and China, the world’s record 50 million refugees are a fun trifle. Foisted on Italy, they are a burden.

    • suntzuanime says:

      So China is on the hook for accepting a quarter million refugees, and so is Monaco? I don’t know that this scheme really works out how you want it. I do see some benefits though; if a refugee knows that they only have a 1/4 chance of ending up in a developed nation, instead of getting to pick the richest nation they can plausibly reach, they might decide there’s not as much benefit in being oppressed and we might see the numbers of refugees start to ease up a bit.

  69. leviramsey says:

    Minor quibble: the house purported to be the Governor’s Mansion for Massachusetts is the private residence of Deval Patrick. Massachusetts doesn’t have a Governor’s Mansion.

  70. Joe Katzman says:

    “Nonconservative whites show a preference for black politicians over otherwise-identical white politicians in matched-“resume” studies, leading to greater willingness to vote for them, donate to them, and volunteer for them. I don’t think the paper looked at conservative whites, and I’m curious what they would have found if they did.”

    The same.

    Dr. Ben Carson, who has zero political experience, is 2nd in the GOP race right now and growing. Also, old joke:

    “What do you call a black person at a GOP event?”
    – “The keynote speaker.”

    The fact that people made this joke answers the question.

    This isn’t a good tendency, though… Dr. Carson is one of the best human beings I’ve ever seen run for office, but real life doesn’t come with the same scriptwriters that work in American movies and TV. Dr. Carson vs. Putin? Xi? Uh, are you freakin’ kidding me?

    With that said, after hearing his stories, if his *mom* wanted to run, we should probably give her serious consideration.

  71. norm says:

    If you can’t see it, Scott Adams is pushing Trump hype (and his own book) in the guise of Deep Thought.

    Why 98%? Because it sounds fucking great and you have that 2% to blame if your shilling falls flat.