Not Even A Real Links Post, Just A Blatant Ad

[EDIT: This is over now. Donating now will be totally useless aside from helping make the world a better place.]

There’s a charity thing going on today where whoever gets the most unique donors of $10 or more by midnight tonight wins a bundle of useful services supposedly valued at $250,000.

MIRI, the Machine Intelligence Research Institute, is a group that works on ensuring a positive singularity and friendly AI. They were co-founded by Eliezer Yudkowsky, they’re closely affiliated with Less Wrong (and they briefly employed me). They are currently just eight donors away from the top place, with only four hours left to go (I think they’re on California time).

If you are concerned about the far future, please consider giving them a quick $10 donation at this link to tip them over the edge.

EDIT1: Someone claimed anonymous donations don’t count, so you might want to donate under your real name.

EDIT2: Or if you’re interested in eye care for poor Indian children, you can donate to the current leader, the Sankara Eye Foundation. Everyone else seems too far behind to catch up.


21 Responses to Not Even A Real Links Post, Just A Blatant Ad

  1. Vulture says:

    Robots vs. Indian children! I would watch that movie.

    (EDIT: In other news, MIRI only needs 8 more unique donors to win the $250,000! Come on people!)

    EDIT 2: Okay now MIRI’s ahead. God damn this is a massacre. Still, donate!

  2. Luke says:

    Thanks, Scott.

    Important clarification: it’s $250k of *in-kind* donations from Microsoft, not cash. So it’s probably worth a lot less than $250k in cash, but still worth winning if we can!

    • Scott Alexander says:

      While you’re here – I was talking to some friends today, and we worried that having MIRI win nearly all of the twenty-four hour timeslots was kind of embarrassing, in a public-relations type way. If you win the grand prize and end up with lots and lots of money, consider trading off some money for potential nationwide positive publicity by just saying you’ll let those slots (some of those slots?) go to the second-place winner rather than run a sweep.

      • Luke says:

        We’ve been discussing the PR issues all day, yes. Anyway, thanks for the idea. It’s tricky.

        • Scott Alexander says:

          I figured you’d already be on it, but just wanted to make sure. Anyway, it is a problem I congratulate you for having.

  3. Luke says:

    Oh, also, I didn’t even notice this the first time through, but it’s total unique *donors*, not total unique *donations*, that matters.

    • Scott Alexander says:

      I figured “unique” already meant “by different people”, but changed for clarity.

  4. Pingback: Not even a real links post, just a blatant ad | Benjamin Ross Hoffman's personal blog

  5. rationalnoodles says:

    Donated $10 for HPMoR.

  6. Rob says:

    Everyone else seems too far behind to catch up? Seriously?

    …Yeah, that works. (n=1)

    • Roxolan says:

      Since your post followed two spam posts, I was paranoid for a second. Now I wonder if a spam-bot that’s just taking a random sentence out of a blog post and adding a question mark and an expression of aggressive disbelief wouldn’t do wonders to go through human and robot filters alike. Might even spark genuine arguments, as aggressive disbelief tends to do.

      …You are a human, right?

  7. Deiseach says:

    I was confused at first about the “most unique” donors bit; I thought it meant something along the lines of “Can you persuade Pope Francis to bung us a tenner?”

    Apparently, what they mean is “individual donors, not part of a group or business or other collective donation”.

    And this is one more reason why I think that, before aiming for The Singularity, we should all be very careful about our definitions and make sure we’re all singing off the same hymn-sheet (as it were).

  8. Kyle Blake says:

    You have an unmatched closing parenthesis: “briefly employed me).”

  9. matt says:

    This might not be the right place for this question, but what is the point of MIRI? It’s cool that they want to think about strong AI, but what reason do they have to think that, if strong AI were to come about, we would have any control over it? Browsing their publications, they mostly strike me as the sort of logically-founded armchair reasoning that made up the body of academic literature up through the 80s, but which was abandoned because every such system broke spectacularly when anyone actually tried to build it.

    I’ve long been puzzled by how little overlap there is between the academic research community and the sort of people who staff places like MIRI. I think this is part of my confusion about these people: they are obviously very smart, but they seem to have zero awareness of the failure of decades of research in areas like natural language processing (of which I am a part).

    • Paul Torek says:

      Short answer: according to MIRI, in most scenarios we wouldn’t have any control over the AI. But in a few, where it’s done right, we would. Which is exactly why the issue is so damn important.

    • Iceman says:

      Why do you believe that they have no awareness of the AI winter? Informally, Eliezer often alluded to various GOFAI failures in the Sequences (The Detached Lever Fallacy and Truly Part of You come immediately to mind, but I bet there are better examples).

    • Nisan says:

      I can’t speak for MIRI, but I’ve worked with them. They do use logic, but a lot of their work combines logic with probability; see for example the paper on “Probabilistic Logic” on their research page. This approach to probabilistic logic, and especially the workaround to Tarski’s theorem, were not available to the proponents of the old-fashioned logical approach to AI.

      Also, real mathematicians have engaged with MIRI’s work, including John Baez.

    • Matt: Luke recently wrote up an explanation of MIRI’s rationale here. MIRI researchers aren’t extremely confident that AGI can be made safe, but they generally think it’s easier to make AGI safer than to completely prevent the invention of AGI; so the former is what they work on.

      MIRI’s interest in mathematical logic has less to do with GOFAI and more to do with its mission of increasing the likelihood of high-assurance AGI. The ‘high-assurance’ part means that a system has to be transparent enough to be assessed by humans, formally verified, etc. The ‘AGI’ part means that we’re doing basic research on a projected technology, not an existing one, so safety work at this stage has to be relatively abstract and theoretical in character.