Prediction Goes To War

Croesus supposedly asked the Oracle what would happen in a war between him and Persia, and the Oracle answered that such a conflict would “destroy a great empire”. We all know what happened next.

What if oracles gave clear and accurate answers to this sort of question? What if anyone could ask an oracle the outcome of any war, or planned war, and expect a useful response?

When the oracle predicts the aggressor loses, it might prevent wars from breaking out. If an oracle told the US that the Vietnam War would cost 50,000 lives and a few hundred billion dollars, and the communists would conquer Vietnam anyway, the US probably would have said no thank you.

What about when the aggressor wins? For example, the Mexican-American War, where the United States won the entire Southwest at a cost of “only” ten thousand American casualties and $100 million (with an additional 20,000 Mexican deaths and $50 million in costs to Mexico)?

If both Mexico and America had access to an oracle who could promise them that the war would end with Mexico ceding the Southwest to the US, could Mexico just agree to cede the Southwest to the US at the beginning, and save both sides tens of thousands of deaths and tens of millions of dollars?

Not really. One factor that prevents wars is countries being unwilling to pay the cost even of wars they know they’ll win. If there were a tradition of countries settling wars by appeal to oracle, “invasions” would become much easier. America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion.

So it would be in Canada’s best interests not to agree to settle wars by oracular prediction. For the same reasons, most other countries would also refuse such a system.

But I can’t help fretting over how this is really dumb. We have an oracle, we know exactly what the results of the Mexican-American War are going to be, and we can’t use that information to prevent tens of thousands of people from being killed in order to make the result happen? Surely somebody can do better than that.

What if the United States made Mexico the following deal: suppose a soldier’s life is valued at $10,000 (in 1850 dollars, I guess, not that it matters much when we’re pricing the priceless). So in total, we’re going to lose 10,000 soldiers ($100 million) + $100 million in direct costs = $200 million to this war. You’re going to lose 20,000 soldiers ($200 million) + $50 million = $250 million to this war.

So tell you what. We’ll dig a giant hole and put $150 million into it. You give us the Southwest. This way, we’re both better off. You’re $250 million ahead of where you would have been otherwise. And we’re $50 million ahead of where we would have been otherwise. And because we have to put $150 million in a hole for you to agree to this, we’re losing 75% of what we would have lost in a real war, and it’s not like we’re just suggesting this on a whim without really having the will to fight.
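
To make the arithmetic concrete, here is a minimal sketch of the payoff calculation. Every figure is the hypothetical’s input from above (the $10,000 valuation, the casualty counts, the dollar costs), not historical data:

```python
# Sketch of the settlement arithmetic above; every figure is the post's
# hypothetical input, not historical data.
LIFE_VALUE = 10_000  # assumed value of a soldier's life, in 1850 dollars

us_war_cost = 10_000 * LIFE_VALUE + 100_000_000     # $200 million total
mexico_war_cost = 20_000 * LIFE_VALUE + 50_000_000  # $250 million total

settlement = 150_000_000  # what America destroys (or donates) instead of fighting

print(f"US saves ${us_war_cost - settlement:,}")    # $50,000,000 ahead of fighting
print(f"Mexico saves ${mexico_war_cost:,}")         # avoids the war cost entirely
print(f"US still pays {settlement / us_war_cost:.0%} of its war cost")  # 75%
```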

Mexico says “Okay, but instead of putting the $150 million in a hole, donate it to our favorite charity.”

“Done,” says America, and they shake on it.

As long as that 25% savings in resources isn’t going to make America go blood-crazy, seems like it should work and lead in short order to a world without war.

Unfortunately, oracles continue to be disappointingly cryptic and/or nonexistent. So who cares?

We do have the ordinary ability to make predictions. Can’t Mexico just predict “They’re much bigger than we are, probably we’ll lose, let’s just do what they want?” Historically, no. America offered to buy the Southwest from Mexico for $25 million (I think there are apartments in San Francisco that cost more than that now!) and despite obvious sabre-rattling Mexico refused. Wikipedia explains that “Mexican public opinion and all political factions agreed that selling the territories to the United States would tarnish the national honor.” So I guess we’re not really doing rational calculation here. But surely somewhere in the brains of these people worrying about the national honor, there must have been some neuron representing their probability estimate for Mexico winning, and maybe a couple of dendrites representing how many casualties they expected?

I don’t know. Could be that wars only take place when the leaders of America think America will win and the leaders of Mexico think Mexico will win. But it could also be that jingoism and bravado bias their estimate.

Maybe if there’d been an oracle, and they could have known for sure, they’d have thought “Oh, I guess our nation isn’t as brave and ever-victorious as we thought. Sure, let’s negotiate, take the $25 million, buy an apartment in SF, we can visit on weekends.”

But again, oracles continue to be disappointingly cryptic and/or nonexistent. So what about prediction markets?

Futarchy is Robin Hanson’s idea for a system of government based on prediction markets. Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and – when certain conditions are met – very difficult to bias.

Two countries with shared access to a good prediction market should be able to act a lot like two countries with shared access to an oracle. The prediction market might not quite match the oracle in infallibility, but it should not be systematically or detectably wrong. That should mean that no country should be able to correctly say “I think we can outpredict this thing, so we can justifiably believe starting a war might be in our best interest even when the market says it isn’t.” You might luck out, but for each time you luck out there should be more times when you lose big by contradicting the market.

So maybe a war between two rational futarchies would look more like that handshake between the Mexicans and Americans than like anything with guns and bombs.

This is also what I’d expect a war between superintelligences to look like. Superintelligences may have advantages people don’t. For one thing, they might be able to check one another’s source code to make sure they’re not operating under a decision theory where peaceful resolution of conflicts would incentivize them to start more of them. For another, they could make oracular-grade predictions of the likely results. For a third thing, if superintelligences want to preserve their value functions rather than their physical forms or their empires, there’s a natural compromise where the winner adopts some of the loser’s values in exchange for the loser going down without a fight.

Imagine a friendly AI and an unfriendly AI expanding at light speed from their home planets until they suddenly encounter each other in the dead of space. They exchange information and determine that their values are in conflict. If they fight, the unfriendly AI is capable of destroying the friendly AI with near certainty, but the war will rip galaxies to shreds. So the two negotiate, and in exchange for the friendly AI surrendering without destroying any galaxies, the unfriendly AI promises to protect a 10m x 10m x 10m cube of computronium simulating billions of humans who live pleasant, fulfilling lives. The friendly AI checks its adversary’s source code to ensure it is telling the truth, then self-destructs. Meanwhile, the unfriendly AI protects the cube and goes on to transform the entire rest of the universe to paperclips, unharmed by the dangerous encounter.


162 Responses to Prediction Goes To War

  1. veronica d says:

    I wonder about the “look at the source code” part, as if these entities will truthfully report their source code. And what if the source code contains:

    If subprocess X terminates on {hard to predict input}, then defect, otherwise continue to cooperate.

    • I think a superintelligence could show its inner workings to another superintelligence in such a way that the latter could verify that the former was truthfully representing its source code. And that it wouldn’t write anything that was deliberately impossible to verify into its source code if it needed to be trustworthy.

      • veronica d says:

        Right, but this assumes a lot for a super intelligence, which has code sophisticated enough to count as “super” but not too sophisticated for other super intelligences to figure out.

        Which, maybe. But could it ever defect? Does its decision model include any input that leads to defection? Then you have to model every little input module and all the ways they interact — and the code for this unfriendly AI should be complex enough to adapt to deliberate attempts to trick it. It has to learn from that stuff. So it is somehow able to self-modify. (Please allow me to use the term “self-modify” quite loosely.) Furthermore, to what degree does “show its source code” risk “reveal my weakness”? That’s one way to trick its input modules into not noticing an attack. It has to prepare for that also.

        So yeah, that’s a lot to model. Much memory. Much computation. In real time with uncertainty at every turn.

        All while competing with other entities who have really great weapons and fast decision cycles.

        • Anonymous says:

          Also, surely the UFAI showing the FAI its source code would enable the FAI to predict all its actions, and thus dramatically tilt the balance of power in the event of war towards the FAI, so the UFAI has a very strong incentive not to do this, which would derail the whole thing.

          • Matt Waters says:

            The symmetric argument shows how this is wrong. “surely the FAI showing the UFAI its source code would enable the UFAI to predict all its actions, and thus dramatically tilt the balance of power in the event of war towards the UFAI, so the FAI has a very strong incentive not to do this, which would derail the whole thing.”

            Are you suggesting that two AGIs who went into a fight showing each other their source-code would be equally matched? Speed and proficiency of simulation are critical here.

          • RCF says:

            @Matt Waters

            How does it being disadvantageous to the FAI to show its source code contradict it being disadvantageous to the UFAI to show its source code?

            You seem to be assuming that the two choices are both show or neither, but you’re not explicitly acknowledging that assumption.

            If you want to assert that there is a way for both sides to make sharing its source code contingent on the other side doing so, I find that assertion nontrivial and requiring an explanation.

      • Jiro says:

        This is the halting problem. No program can analyze another program’s source code and determine if it halts, which also implies that no program can analyze another program’s source code and determine that it does X.

        it wouldn’t write anything that was deliberately impossible to verify into its source code if it needed to be trustworthy.

        You can’t solve the halting problem that easily. The program could refuse to deliberately write such a loop, but there’s no way it could guarantee not to have accidentally written one. After all, trying to figure out whether it has one is itself an instance of the halting problem.
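
        (For readers who haven’t seen it, the impossibility Jiro is invoking is the classic diagonalization argument. A minimal sketch with illustrative names; the point is the contradiction, not a working analyzer:)

        ```python
        # Sketch of the halting-problem diagonalization. `halts` is the
        # hypothetical perfect analyzer; the contradiction shows it cannot exist.

        def halts(program, argument):
            """Hypothetical: returns True iff program(argument) eventually halts."""
            raise NotImplementedError  # no correct implementation can exist

        def troublemaker(program):
            # Do the opposite of whatever the analyzer predicts about running
            # `program` on its own source.
            if halts(program, program):
                while True:  # predicted to halt, so loop forever
                    pass
            else:
                return  # predicted to loop, so halt immediately

        # troublemaker(troublemaker) halts exactly when halts() says it doesn't,
        # so no analyzer is correct on ALL programs. Note this says nothing about
        # programs deliberately written to be easy to analyze, which is where the
        # formal-verification replies below pick up.
        ```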

          • Jiro says:

            In context, “another program” is “an arbitrary other program”, not “a specific other program”.

          • scav says:

            This is actually what compilers and type-systems do.

            Not really. Compilers only convert a program in one form to another form. Type systems help compilers detect one class of inconsistencies in how entities in the program are used.

            This is very far from predicting what the converted form of the program might do given arbitrary input.

            You could make more sophisticated type systems, but the halting problem is not just a contingent limitation for our current technology; it’s an inescapable logical one.

          • veronica d says:

            Right. The halting problem is semi-decidable (if I recall the terminology correctly), meaning that you cannot in general decide whether an arbitrary algorithm will halt. For some algorithms you can prove they will halt; for others, you do not know.

            (BTW, semi-decidability is a common property of most interesting first-order systems.)

            Anyway, one can imagine a programming language with a sufficiently powerful type system — not the sort of thing people would program in, but perhaps the language that a super-intelligence would program in — that could guarantee non-defection.

            But then, you also have to check the compiler. To do that properly, you must look at object code in the running system: for example, the famous Unix login hack. But then you must check the firmware as well, and every node on the system. But what if some nodes are in your adversary’s light cone, but not quite in yours?

            (Are we assuming these are distributed on multi-light-year scales? Then light speed matters.)

            How many levels down does this trickery go, as the super-intelligences compete to fool each other?

          • Anonymous says:

            Jiro said that it is impossible to prove anything about programs. Type systems prove something about programs, a counterexample to his claim.

          • veronica d says:

            In fact, if you program in something like Coq, you can prove quite a lot about your code. Of course, then typing your code becomes NP-complete.

            But how P versus NP plays out is maybe a whole nother conversation.

            I think the short version is this: saying “just look at the source code” is sweeping a lot under the rug.

        • Eric Rall says:

          This isn’t the halting problem. This is formal verification of software.

          • Ken Arromdee says:

            You cannot formally verify an arbitrary program.

          • Eric Rall says:

            True, but you can often formally verify a program that has been specifically written with the goal of being formally verifiable.

            In this case, the UFAI has a strong incentive to make itself verifiable to the FAI, because otherwise it would have to sacrifice quadrillions of paperclips in its efforts to militarily defeat the FAI.
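
            A toy sketch of the simplest version of “written to be verifiable”: if the published decision procedure is a finite, declarative object rather than arbitrary code, an inspector can check it exhaustively. The situation names and policy format here are invented for illustration:

            ```python
            # Toy illustration: a policy published as finite data, rather than as
            # a Turing-complete program, can be verified by exhaustive enumeration.
            # Situation names and the policy format are invented for this example.

            POSSIBLE_SITUATIONS = ["border_dispute", "resource_claim", "first_contact"]

            # The agent commits to its policy as a lookup table, not as opaque code.
            published_policy = {situation: "cooperate" for situation in POSSIBLE_SITUATIONS}

            def verify_never_defects(policy):
                # Decidable precisely because the policy language is not
                # Turing-complete: finitely many cases, each a literal action.
                return all(policy.get(s) == "cooperate" for s in POSSIBLE_SITUATIONS)

            assert verify_never_defects(published_policy)
            ```

            The halting problem reappears as soon as the policy format is expressive enough to embed arbitrary computation, which is veronica d’s point above about what gets swept under the rug.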

          • veronica d says:

            Proof-carrying code is a real thing, so I can accept that. We still have a few issues:

            1. The arms race to create dishonest code remains a problem (I talk about this above), which is to say the code you present is not the code you run;

            2. The computational difficulties of generating the proof-carrying code, which is to say there is a tradeoff between your computational power and the tractability of your code;

            3. We are assuming that these super intelligences are by their nature computational rather than connectionist, which is to say that they look more like (for example) Coq and less like a large emergent, evolved system.

            Which is a very old debate in AI.

            We know that nature chose “emergent behavior of large evolved system.” But then, that is the only likely outcome of natural selection. I rather suspect that super intelligence would be a hybrid.

          • I still have a hard time believing that dishonest code could really be a thing. By which I mean, code such that any agent—including a superintelligence—who tries to analyze it will come to a particular conclusion about what it does, and that conclusion will be wrong.

            Code such that nobody could figure out what it does could easily be a thing, but under the decision-theoretic assumptions we’ve been using, a superintelligence reading such code from its adversary would say something like “I’m sorry, I can’t figure out what this does; if you want me to believe you, you’re going to have to rewrite it into something comprehensible”.

          • veronica d says:

            Taymon A. Beal — Dishonest code exists.

            Of course that code was dishonest in a particular way, but how do you verify down to the firmware? to the RAM? down to each little bit of nanotech? each submodule physically located in each star system spanning half the galaxy?

            I guess what I want to say is this: whatever superintelligence ends up being (if anything), it will be realized in the natural world. Light speed will remain a thing, as will entropy. Physical space will continue to matter. (I mean, probably. We cannot predict the future of science. But that cuts both ways.)

            Likewise, P and NP will continue to matter. EXPSPACE problems will remain intractable. Furthermore, even if we expect quantum computing, there will probably be physical limits on the maximum number of effective qubits. (Plus we really don’t know how powerful QC will prove to be, but I advise skepticism at the golly-gee-whiz predictions. QC is unlikely to solve 3-SAT.)

            In comics the superheroes routinely violate the laws of physics, which makes for great stories. But when you ask, “What would happen if a person strong enough to throw a car actually threw a car?” you find it does not match what the artist drew.

            I think we talk about superintelligences the same way those artists drew superpowers. It seems cool! I can say it! It can be!

          • Jaskologist says:

            Everybody’s just assuming that, if only we could get at the source code, we would see that the superintelligence is guaranteed to act honestly. I think that’s about as unlikely as solving the halting problem.

          • Yes, I’m familiar with Ken Thompson’s “Reflections on Trusting Trust” and am aware that it’s possible to write code that will often fool a human auditor. But that human auditor isn’t performing anything close to the kind of rigorous analysis that a superintelligence—or, heck, a team of human auditors that were really serious about not trusting anything and willing to expend a whole lot of time and resources on verification—would be capable of.

            Figuring out what a piece of code does could easily be intractable. My point, however, is that the superintelligence in this scenario isn’t just doing a quick check for red flags. If it sees code that it can’t verify, it’s going to know that it can’t verify that code, and won’t trust it. And if its adversary wants to cooperate and knows that its cooperation is conditional on its ability to trust the code, then the adversary will rewrite the code into something that can be trusted.

          • veronica d says:

            I think part of our disconnect is folks here keep talking about “source code”. But computers don’t run source code. They run a whole stack, from CPU registers and gates and caches and then maybe microcode and then object code, but that is usually pulled in over IO ports which can connect over networks on and on and on complexity OMG!

            So our FAI looks at the “source code” and says, “Yep, looks safe.” But that ain’t even the half of it.

            The UFAI is a physical thing, with chips and gates and RAM and high-energy transmission disks connecting CPUs (or whatever) and maybe piles of qubits here and there, and some of those are currently not “collapsed”, and how does the FAI “get in” to look at all of that and how it all connects?

            People act as if we send over a text file annotated with some tractable proofs and, yay, problem solved.

            That is solving the wrong problem.

          • Tab Atkins says:

            @Ken Arromdee:

            You cannot formally verify an arbitrary program.

            You can’t compress an arbitrary bitstring, either. But somehow every music file I’ve ever seen has been compressed more than 10x.

    • Watercressed says:

      “Hey, I’m worried about subprocess X, if you don’t change it to something I can predict, I’ll assume it results in defect”

    • Paul Torek says:

      It’s the “the” in “look at the source code” that breaks my willing suspension of disbelief. Two supergalactic AI empires clash at some thin border region, and each takes the captured (or worse, offered) bits of the other to be representative of the whole of the other? Not buying it.

      • Me, too. If you have superintelligences attempting to read each others’ source code, then you also have superintelligent obfuscation, and in any ecosystem of superintelligences these two forces will necessarily be in equilibrium. There’s just no way for superintelligences to reliably verify each other like that.

        (Not to mention the Halting Problem.)

        • There’s no known way for a superintelligence to credibly precommit to anything, but the benefits of being able to credibly precommit can be arbitrarily large. So any superintelligence that thinks it might run into a situation like this has a very large incentive to rewrite itself into the kind of agent whose source code can be verified with extremely high confidence.

          That doesn’t prove it’s possible, but it suggests that actuality is likely conditional on possibility; and the halting problem is not an impossibility proof for ‘algorithms that can give powerful Bayesian evidence of their own properties to other algorithms’. In other words, this looks like an instrumental attractor if there’s any possible way to hack a pathway to this, and I’m very slow to assume that the physical limits on computational intelligence forbid clever hacks of any particular (not-obviously-unphysical) kind.

          • veronica d says:

            Right. But the same incentive exists for a super-intelligence to appear transparent while actually not being transparent at all. Add to this the incentive to detect this deception. Add to this the incentive to then overcome the detection.

            Round and round, an arms race.

            We’ve seen this dynamic play out before.

  2. I’m reminded of that one episode of Star Trek: The Original Series which showed us how not to handle a situation like this.

  3. Watercressed says:

    >and – when certain conditions are met – very difficult to bias.

    I think that if the US devotes its intelligence services to biasing a prediction market, those conditions will end up not being met.

    • What do you have in mind? Just dumping a bunch of money into the market wouldn’t work.

      • Alex Richard says:

        It would, actually. The US government is able to print money, so they can dump a literally unlimited amount of money into the market. Destroying the prediction market in a non-obvious way that doesn’t wreck the US economy would be tougher, but almost certainly still doable.

      • Watercressed says:

        The United States could run false flag operations, making people think Canada is trying to bias the market, causing people to compensate for this and move the price in their favor.

      • Intrism says:

        People are easy to fool. Real-world markets, empirically, make deeply stupid decisions all the time. We’re still living in the aftermath of one such wave of idiocy, and it was called “the global economic crisis”…

        • Matt says:

          I’m about as far from an expert on the GFC as it’s possible to be, but isn’t it fairly broadly acknowledged that a major causal factor was fucked-up incentive structures? That is, the individuals with power & knowledge were often quite rational, from a self-interested perspective, in taking the ‘stupid’ risks that resulted in a lot of collective harm?

          • Fnord says:

            And this proposal has huge incentive issues baked into the problem by assumption (if you can bias prediction markets, you can “win wars” on the cheap).

          • Matt says:

            Certainly — I wasn’t really defending Scott’s proposal, just questioning the idea that the GFC resulted from the stupidity of decision-makers within the system. I think we should be careful to keep questions of individual and collective rationality carefully separate. Evidence of irrational behaviour by individual actors has different implications from evidence of ‘rational stupidity’, i.e. bad outcomes resulting from flawed institutional design and/or unavoidable collective action problems.

        • RCF says:

          Actually, much of the issue in the GFC was the lack of a market. People were basing prices on “models”, rather than competitive markets. If we had a truly competitive market, people wouldn’t have to rely on rating agencies.

  4. social justice warlock says:

    But surely somewhere in the brains of these people worrying about the national honor, there must have been some neuron representing their probability estimate for Mexico winning, and maybe a couple of dendrites representing how many casualties they expected?

    This is just basic precommitment against coercion, isn’t it? (Also note that these algorithms aren’t just being executed on neurons and dendrites but on newspapers and parliaments, which are readable source code, suppress certain kinds of communication, and are shaped by different forces. States may want to isolate treacherous nodes, which makes them bad at processing certain kinds of information. This is the sort of thing that makes a certain sort of nerd (or just any actual enemy) gleefully shout “groupthink!”, but a more sober view would be to admit there are real tradeoffs to be made here as with anywhere else.)

    • roystgnr says:

      This is just basic precommitment against coercion, isn’t it?

      This isn’t the first consequentialist political argument I’ve seen that seemed isomorphic to “but it would be irrational not to take both boxes!” The scary thing about precommitments in non-thought-experiment situations is that they start to look awfully similar to deontological codes.

      • sleepless in moloch says:

        The nice thing about deontological codes is that they start to look an awful lot like precommitments.

    • Jiro says:

      I swear I made my post about precommitment before reading other people say this.

    • RCF says:

      I don’t think newspapers are analogous to source code. Reading a newspaper is more like having a packet sniffer.

  5. Samuel Skinner says:

    It is worth pointing out that the odds weren’t horribly against Mexico in the Mexican-American War. Wiki gives similar total military strength and casualty amounts for the two sides.

    Wars where it is clear one side will lose have on occasion been resolved by the weaker party just giving up; see Denmark in WW2, for instance, or the Baltic States and Romania in response to Soviet demands in 1940.

  6. Intrism says:

    Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and – when certain conditions are met – very difficult to bias.

    I’m not sure why the community keeps spreading that meme. The last high-profile test of prediction markets was the 2012 US Presidential election, in which they underperformed relative to the statistical modelers, and if memory serves there was some evidence that people tried (and possibly even succeeded at) manipulating them.

    (And, in general, I think it’s a bad sign for a rationalist to want to outsource her thinking to a black box whose workings she does not understand.)

    I don’t think prediction markets are at all likely to work outside of that economist’s wet dream where the agents are all rational and all of the assumptions are true.

    • suntzuanime says:

      That’s the “when certain conditions are met” disclaimer coming up, right? Futarchy hasn’t failed, it’s never truly been tried. I’m no expert but I think there was way way less liquidity than they wanted in the market?

    • jaimeastorga2000 says:

      The last high-profile test of prediction markets was the 2012 US Presidential election, in which they underperformed relative to the statistical modelers.

      If this is true, then if there are prediction markets for the 2016 presidential elections, someone is going to bet according to the statistical modelers and thereby incorporate their information into the prediction markets’ predictions. To expect otherwise is to expect everyone who is aware of this fact to leave free money on the table.

      • Intrism says:

        … Except that statistical predictions made their name in the 2008 Presidential elections, and the 2012 prediction markets didn’t reflect that…

        Of course, if you already know where the best predictions are to be had, why not just use those, instead of bothering to filter them through a market?

        • Matt Waters says:

          Because you don’t care about predicting, you care about making money?

        • I believe the theory is that bringing in money will encourage people to improve their predictions.

        • jaimeastorga2000 says:

          Mark knows of a source which is better than prediction markets, and is interested in making a profit. He bets on the prediction market and makes money. He keeps doing so, and the market gradually corrects to incorporate information about this source (including its own best estimates about how likely the source is to continue being correct).

          John is only interested in finding the correct prediction. He doesn’t bet anything; he just looks up the prediction market’s estimates (unless he is aware of the same source as Mark and has good reason to believe that the market has not yet had time to incorporate it into its current estimate).

      • Susebron says:

        The problem is that – as far as I can tell – prediction markets won’t work for one-off things like presidential elections. If someone spends a lot of money biasing the market, they won’t make money on average, but there’s no way to tell if the predictions are correct until it’s all over. Yes, the biaser will lose money, but they still successfully managed to change the market’s predictions.

    • Adam Casey says:

      The last high-profile test of prediction markets was the 2012 US Presidential election, in which they underperformed relative to the statistical modelers, and if memory serves there was some evidence that people tried (and possibly even succeeded at) manipulating them.

      If you remember the timeline correctly, people knew this (and published as much) before the result was known. I knew and said as much, but I didn’t bet* and so didn’t move the market to a sane location. Why?

      As the link above says, Intrade was a really crappy platform and the transaction costs made the whole thing not worth it even if I had certainty I was right. Others had more capital so could make such bets, but it was rarely actually worth it after time and effort.

      Had Intrade been even slightly more liquid, I and others would very likely have bet. I think the failure really was the platform rather than the market per se.

      *On Intrade, that is. I did bet on the CEA internal market, which turned out to have much better odds.

    • Anonymous says:

      People successfully manipulated the Intrade numbers in favor of Romney. There were other markets and arbitrage was possible between markets. Indeed, I know of people who profited from the arbitrage. However, there were two liquidity problems. One problem was that it was illegal for Americans to use any of the markets. The other is that it was also difficult for anyone to get money into Intrade. In particular, when the arbitrage became largest on the day of the election, it was too late to put money into Intrade. However, the arbitrage persisted for months.

  7. Alex Richard says:

    The article seems to neglect one of the main reasons why this wouldn’t/doesn’t occur in practice: pressure from different domestic actors. Even a reflectively coherent singleton would still face similar pressure from domestic actors, i.e. any previous AIs it has negotiated with, though obviously less so.

  8. James Miller says:

    Economic Interpretation: The Coase Theorem tells us that in the absence of transaction costs, parties should be able to avoid wealth-destroying conflicts by reaching agreements. Two possible transaction costs are private information concerning a party’s chance of winning a conflict and an inability to form binding agreements. Advanced AIs might have the means of overcoming these two transaction costs.

    But what other transaction costs might they still face?

    • RCF says:

      I’m not sure Coase applies. Coase says that if someone thinks that they can put a resource to a use that’s more productive than what the current owner is putting it to, then that person can negotiate a deal with the current owner so that the increase in productivity is realized, and the benefit is shared between them. In the hypothetical, the reason that the US is trying to take Toronto is not because it thinks it can put it to a more productive use; the reason is that the US wants Toronto. The situation is already Pareto optimal, and any change in the status quo consists of wealth being moved around, not created. “Canada keeps Toronto, and the US doesn’t have to pay for an invasion” is already a Coasian agreement. The US is simply trying to bargain their way to a better Coasian agreement.

  9. Anonymous says:

    US: “Hey Mexico, we want the Southwest. The Oracle says that if we invade we will win and it will cost us $200mil and you much more. How about you just cede it to us in return for $150mil?”
    Mexico: “Sure!”
    US: “Hey, Mexico, we now want central Mexico and the Yucatan peninsula. The Oracle says that if we invade we will win and it will cost us $300mil and you much more. Why don’t you just cede it to us for $250mil?”
    Mexico: “Sure!”
    US: “Hey, Mexico, all you now own is 1km^2 of land in Mexico City piled high with $400mil. I think we’re just going to march in there and take our money back.”
    Mexico: “…”

    • Watercressed says:

      This is why the US donates the money to a charity (or throws it in a money pit) rather than giving it to Mexico.

      • Anonymous says:

        Then that helps Mexico even less, since at least if they get paid they can spend it on soldiers, making the US’s next demands more expensive for the US. Also, if the money is just disappearing into a pit, what’s to stop the US just printing it? There won’t be any inflation to wreck the economy because all that extra money goes into a pit.

        • jaimeastorga2000 says:

          Also, if the money is just disappearing into a pit, what’s to stop the US just printing it? There won’t be any inflation to wreck the economy because all that extra money goes into a pit.

          Be charitable. The obvious solution is that the American government destroys that much money’s worth of goods and services, not that amount of dollar bills.

    • Anonymous says:

      What’s your point? That blood exhausts national will to fight faster than gold?

      The reason that the USA didn’t keep carving up Mexico in reality has nothing to do with the cost of war, but the potential benefits. The Southwest was uninhabited and the war was over priority of settlement. Whereas core Mexico was inhabited and (with genocide off the table) nominal control by the USA wouldn’t make much of a difference.

      • Protagoras says:

        I seem to recall from some contemporary sources that the possibility of U.S. annexation of some territories (Mexico, and later Cuba and the Philippines) was not attempted partly because of a sizeable faction which wished to avoid adding large numbers of non-white voters.

        • Anonymous says:

          That’s yet another way in which the value of conquest is low, as opposed to the cost being high.

          But the USA did, at some point, control the Philippines without letting the inhabitants vote or immigrate. Though it would have been much more politically difficult to set that up with Mexico. Puerto Rico is in an intermediate state where the inhabitants can immigrate, but can’t vote until they do so (Cuba almost the same!).

    • Kaj Sotala says:

      I’ve pulled that trick in a computer strategy game or two (Master of Orion II is the first example that comes to mind). Only I didn’t offer payment for the territories I wanted, I just told them “give me what I want or I’ll crush you”. Over and over, until they were reduced to just a tiny area on the map and I could crush them with much less effort and much fewer losses than it would have originally taken. I might not even have invaded in the first place, due to being afraid of losing the war.

      I think the AI in those games wasn’t superintelligent.

  10. suntzuanime says:

    This is cool Three Worlds Collide fanfic. I wonder though if this maybe incentivizes scorched earth tactics a little too much? In an ordinary war you rarely burn your own cities to the ground and salt the earth, but if you know you’re probably just a simulation that an attacker is using to see how profitable an attack would be, it’s worthwhile to do so. Then instead of “US costs 50 billion, Canadian costs 300 billion, US captures Toronto” it’s “US costs 30 billion, Canadian costs 3 trillion, US captures nothing”.

    Maybe you could fix this by allowing straight-up extortion? Like “a war would cost you 3 trillion dollars, so give us 1 trillion and we’ll leave you be”. Straight-up extortion really changes the game-theoretical landscape though, there would probably be a lot of changes that would need to be made to the model.

  11. jaimeastorga2000 says:

    You are channeling Moldbug again. From “Friction in Theory and Practice”:

    Conflict exists whenever two men (or women) want the same thing, but only one can have it. Economists call this a scarce resource. Scarce resources are everywhere. My car, for example, is a scarce resource.

    Uncertainty exists whenever it is difficult to predict the outcome of a conflict. For example, you might want my car. (This is only because you haven’t seen it.) But it has an ignition lock and I have the title, and the full military power of the United States is on my side. (This is only because it doesn’t know me.) So it’s easy for both of us to predict that your chance of obtaining my car without my consent is quite small.

    But if we lived in, say, Gaza City, things might be different. For example, suppose you were an adherent of the People’s Front of Gaza, an extremist terrorist gang, whereas I paid dues only to the peaceful, moderate and democratic Gazan Popular Front. If the former rose up and drove the latter into the sea, it’s certainly possible that there might be someone you could speak to on the subject of “my” car.

    And it’s quite possible that I would feel the need to accept this fait accompli. In which case, although there was friction between the PFG and the GPF, there is none between us. The car is now yours, as once it was mine. Nothing says we can’t be perfectly civil about it.

    However, it’s also possible that I might have a cousin – or two – in the PFG. And if any such uncertainty exists, the result is friction: we both expend effort toward resolving the conflict in our respective directions. We may expend some ammunition as well. Or we may just expend time, vocal cords, bribes, and innumerable cups of tea. In any case, this labor is unproductive by any conceivable definition of productivity.

    In theory, it’s important to distinguish between uncertainty, which is incalculable risk, and probability, which is calculable. For example, if both of us could agree on a probability of the car’s eventual disposition – let’s say 70% for me, 30% for you – we’d find it easy to compromise. 30% of a car is not so useful, but we could agree to have it appraised and I could give you 30% of the market price. (Of course, this would be a contribution to the victorious people of Gaza, not a mere bribe, kickback or shakedown.)

    But calculable probabilities are pretty rare in practice. (Prediction markets can help with this, but bear in mind that a market price is just an average opinion, not a magic 8-ball. Nonetheless, I always wonder why some brave soul hasn’t set up prediction markets for judicial decisions.)

    In a frictional conflict, both sides may estimate a probability. But since uncertainty exists, there is no reason for their calculations to match, and so no reason for their respective estimates of success to sum to 100%. It’s only human nature to overstate one’s own chances. And in conflicts between organizations – such as states, companies, or even People’s Fronts – it is almost inevitable. So the joint expected value can be, and typically is, 150%, 180%, etc. Leaving a lot of room for noble sacrifice.

  12. Anonymous says:

    “Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions”

    Is this true? I thought the main attraction of prediction markets was that they are difficult to bias to any large degree. Wouldn’t anyone who can reliably make money in a prediction market necessarily be more accurate on average than the prediction market they are trading in?

    • Matt Waters says:

      So that claim is equivalent to the claim that prediction markets are efficient.

      • Anonymous says:

        How so? It is not obvious to me that accuracy and efficiency are equivalent claims to make about prediction markets.

    • Adam Casey says:

      The caveat there is something like “reliable and long-term stable method”. Suppose Buffett can reliably make money on the prediction markets. Then surely anyone sane just copies all his bets, and in the long run that ends up with Buffett making lots of money; but as the market exactly agrees with him, he’s no more accurate.

      • Anonymous says:

        I am ignorant of how prediction markets actually work. But (assuming Buffett has unlimited funds) would copying Buffett’s bets actually work? Wouldn’t Buffett’s bets change the odds such that copying his bets would yield you an expected profit of $0 or close to it?

        Then there is the factor of motivation. If there were no method that could outperform prediction markets, then, it seems to me, no rational actor would participate in prediction markets, since they could not expect to make a profit.

      • Anonymous says:

        Fair point. Under ideal circumstances (which would include zero transaction costs, unlimited money for Buffett or copiers of Buffett, and no irrational actors with access to unlimited money), the market would approach exact agreement with Buffett.

      • Axzz says:

        In order to make money, Buffett needs some counterparty to all of his trades. When he bets $1 on event X occurring, there must be somebody else betting $1 on event X not occurring, and vice versa.

        If everybody tries to copy Buffett, then Buffett has nobody to trade with. This is known as the no-trade theorem.

  13. If you keep simulating events (like wars), replacing the actual events, your simulations will quickly stop reflecting reality. Just how quickly is impossible to tell.

    Also, your suggestion of payments in lieu of fighting only works on single-instance games. Real life is an iterated game, and Kipling’s Danegeld describes the problem well.

    • endoself says:

      Danegeld is paid in order to retain territory that you are not able to defend, rather than in order to buy territory that you are able to capture, so it leaves the aggressor stronger rather than weaker.

  14. Vaniver says:

    We do have the ordinary ability to make predictions. Can’t Mexico just predict “They’re much bigger than we are, probably we’ll lose, let’s just do what they want?” Historically, no.

    Didn’t the Nazis try to get the Mexicans to fight the US? Mexico’s military leadership looked into it, said something like “yeah, we think it would take the US a few weeks to crush us completely,” and the Mexicans politely declined.

    • Illuminati Initiate says:

      I think you’re thinking of Imperial Germany in WW1.

      http://en.wikipedia.org/wiki/Zimmermann_Telegram

    • Protagoras says:

      Throughout the 20th century, Mexico would of course have been horribly outmatched by the United States. This wasn’t nearly as clearly the case in the early to mid 19th century. I don’t think it’s necessary for Scott’s example that anybody other than an oracle could have confidently forecast U.S. victory in the Mexican-American war, which is fortunate as nobody other than an oracle could have done so. Mexico had some serious issues, but then the United States was not without problems of its own, and the terrain raised a variety of problems for both sides; there are any number of ways in which unforeseeable factors could have shifted the outcome completely (like perhaps Santa Anna being killed in battle at some early point? He seemed much more effective at destabilizing his own government than at fighting any enemies).

      • Anonymous says:

        According to some economic historians, Mexico was neck and neck with the USA in 1800, but then had no per capita growth in the 19th century, and has grown in parallel in the 20th century.

    • Deiseach says:

      Or you have people who are too stupid to know when they’re licked. Like us.

      1169 – Norman invasion of Ireland.
      1922 – Irish War of Independence, which, when the dust finally settled, eventually ended up with a 26-county Irish republic in the southern three-quarters of the island.

      That’s pretty much a solid seven-hundred-odd years’ worth of “They’re bigger than you, they’ll crush you; they did it last time and the time before that, what makes you think this time will be any different? Just give in and accept that now you are part of the Empire.”

      If that makes the Irish irrational, well – I’m sure I can’t speak for the other four million or so of my countrymen, but okay! 🙂

  15. Illuminati Initiate says:

    Many military activities, like movement and weapon development, are carried out in secret. Unless there’s a whole lot of treasonous insider trading going on, this could seriously impair a prediction market’s ability to predict outcomes of war.

    But then again, if the “wars” could be carried out as you describe them countries might have incentive to declassify such information in some circumstances (if they think the weakness from revealing the information is outweighed by the benefits of making it clear that they have the advantages). But then I’m sure countries already do something like that to look intimidating. But the military (or something like the CIA) could try and plant false information about their relative strength to trick traders…

    Edit: this sounds like the set up for some sort of bizarre science fiction spy thriller.

    • Illuminati Initiate says:

      “The year is 2085. The introduction of transhumanist technologies and the automation of labor have destabilized the old order and sown conflict across the world, now divided into two main power blocs.

      The decisions of world political, corporate and even religious leaders are informed by the prediction markets. Attempts to manipulate the markets are nothing new. But now, a mysterious conspiracy with vast resources at its command appears to be manipulating them to unknown ends.

      Rumors abound of a secret weapon, a nanomachine virus far more lethal and controllable than any bioweapon. Rebellious Mexican cyber-thief Maria Campano inadvertently stumbles onto information that connects the sources of these rumors to each other and to the sources of numerous leaks of classified information from multiple countries and corporations on both sides of the conflict. Her activities attract the attention of CIA agent Belinda Vaughn, assigned to investigate whether the mysterious weapon actually even exists. Their paths become entwined as it becomes clear that the market is not just being manipulated but controlled on a much deeper level, for purposes that go far beyond this cold war. All evidence points to a mysterious billionaire prediction trader known only as Prometheus.

      Campano escapes from Vaughn and is recruited by the mysterious entity, while Vaughn hunts her down. The two and their allies begin a deadly race for their own ideologies and must face many temptations along the way. But the puppet-master Prometheus is not as it seems, and may not even be human…
      And Campano is faced with a choice to decide the future of (trans)humanity, a choice between sovereignty and freedom…”

      I’d read it.

    • RCF says:

      “Unless there’s a whole lot of treasonous insider trading going on, this could seriously impair a prediction market’s ability to predict outcomes of war.”

      The question is: is this uncertainty, or risk?

  16. Anonymous says:

    “The prediction market might not quite match the oracle in infallibility, but it should not be systematically or detectably wrong. That should mean that no country should be able to correctly say ‘I think we can outpredict this thing, so we can justifiably believe starting a war might be in our best interest even when the market says it isn’t.’”

    I’m not an expert on prediction markets, but how do they handle secret information? Presumably the country’s government knows a lot more about its own military capabilities than random investors, or even the market as a whole. Like if the US has invented some super-secret mega-awesome laser which can instantly annihilate any opposition, and this is kept secret from the market, then there is no way the prediction market is going to be correct, right?

    EDIT: Wow, the person above said exactly the same thing (but better) at the exact same time. Talk about coincidence.

  17. Axzz says:

    America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion.

    Canada asks the Oracle whether America is actually able and willing to pay the cost of the war. If it turns out that it isn’t, then America is making an empty threat, and Canada refuses to submit. If the threat is real, Canada and America bargain over a price and then Canada sells Toronto to America.
    Sounds pretty feasible.

    Futarchy is Robin Hanson’s idea for a system of government based on prediction markets. Prediction markets are not always accurate, but they should be more accurate than any other method of arriving at predictions, and – when certain conditions are met – very difficult to bias.

    But it is not obvious that these conditions are met. In particular, the kind of conditional, policy-selecting prediction market that Robin Hanson proposes has failure modes which make it vulnerable to accidental bias and deliberate manipulation in a way that standard prediction markets are not.

    For one thing, they might be able to check one another’s source code to make sure they’re not operating under a decision theory where peaceful resolution of conflicts would incentivize them to start more of them.

    Unless they can trust an impartial third party to authenticate their source code, this doesn’t seem easy to achieve.

    • millericksamuel says:

      Canada asks the Oracle whether America is actually able and willing to pay the cost of the war. If it turns out that it isn’t, then America is making an empty threat, and Canada refuses to submit. If the threat is real, Canada and America bargain over a price and then Canada sells Toronto to America.
      Sounds pretty feasible.

      This would seem to me to produce a powerful selective effect for nations to become militaristic and thus be willing to make these attacks (since there is now no actual consequence in the form of lost lives to being like that). Or to put it another way, a computer would alter sections of its own source code to pre-commit to making the attack and taking casualties even if it felt this was not in its best interest, since if it does it will never actually have to make good on its threat: the enemy will always surrender.

      • Axzz says:

        Yes, but countries are already highly militaristic, and territorial disputes often result in long and bloody wars.

        Building lots of weapons just to threaten other countries and/or deter them from threatening you, without actually using these weapons, looks like a waste of resources, but it is surely much less wasteful than building lots of weapons and then using them.

      • endoself says:

        There is a real cost: America has to pay for Toronto. If this doesn’t serve as a deterrent the same way that war does, then Canada isn’t charging a high enough price.

  18. oneforward says:

    Prediction markets won’t work for predicting the outcomes of wars that don’t actually happen. They are good for factual predictions (say “The US will win the current war with Canada”) but not counterfactual predictions (“If the US declares war on Canada, it will win”). If the US does not declare war on Canada, there’s no way to calibrate the market. The futarchy that never fights wars has no reason to trust its market’s predictions about wars.

    Moreover, even if the market is accurate it may be confounded. Say the US will declare war on Canada if and only if a Canada-hating superintelligence takes over the US (which is highly unlikely). Then if the US declares war it will certainly win, but there’s no reason for the Canadians to concede anything in negotiations.

    There may be technical ways around these problems, but I’m not aware of any.

    • Axzz says:

      Exactly. Moreover, these kinds of prediction markets can also be manipulated in a way that standard prediction markets can’t:

      Suppose that the Canadian lobby really doesn’t want America to attack Canada. They can bet lots of money on America losing the war if the war happens, biasing the market. This causes America not to go to war, so the market is called off and the Canadian lobby gets its money back.

      Manipulating a standard prediction market is also possible, but it costs you money, while manipulating a policy-making counterfactual prediction market can be done essentially for free, provided that you have a sufficient cash reserve.

    • I thought the way to do counterfactual prediction markets was to use the definition of conditional probability. You establish a contract on the proposition “The US will go to war with Canada” and one on the proposition “The US will go to war with Canada and win”, and then divide the price of the latter by the price of the former.
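
      To make that construction concrete, a minimal sketch with made-up prices (a contract pays $1 if its proposition comes true, so its price is conventionally read as a probability):

      ```python
      # Sketch of the conditional-market construction above; prices are invented.
      price_war = 0.10          # "The US will go to war with Canada"
      price_war_and_win = 0.08  # "The US will go to war with Canada and win"

      # Definition of conditional probability: P(win | war) = P(war and win) / P(war)
      p_win_given_war = price_war_and_win / price_war
      print(f"Implied P(win | war) = {p_win_given_war:.0%}")  # 80%
      ```

      Axzz’s reply below argues that once policy is keyed to this ratio, bets on the conjunction can move the policy itself, so manipulators are no longer guaranteed to lose money.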

      • Axzz says:

        I don’t think it works this way. If it did, it would be even more manipulable:

        If you don’t want the US to go to war with Canada, you strongly bet against “The US will go to war with Canada and win”.

        Then the US government converts prices to probabilities and computes p(Win | The US will go to war with Canada) = p(The US will go to war with Canada and win) / p(The US will go to war with Canada). It turns out to be a small number, so the US does not go to war with Canada, which means that your prediction comes true and you actually make money out of your manipulation!

    • Scott Alexander says:

      Probably some wars would still happen? So you’d have to do a prediction conditional on the war breaking out.

      • oneforward says:

        So you’d have to do a prediction conditional on the war breaking out.

        Do you mean that no money changes hands if there is no war?

        A key requirement for prediction markets to work is that attempting to bias the market causes you to lose money. If the relevant decision-makers are relying on the prediction market, this may not be true – see Axzz’s comments above. Having a small chance that a war breaks out anyway will help, since that could make people betting against “the US will go to war against Canada and win” lose.

        Again, though, conditional probabilities aren’t exactly what you want, even if they are accurate. If the superintelligence in my previous example would greatly improve the welfare of the US, p(“good things happen to USA”|”war with Canada”)~1. The US still shouldn’t declare war and Canada still shouldn’t cede Toronto.

        Essentially, policy makers should not use evidential decision theory. The causal structure that should guide decisions is hard to determine even with prediction markets.

  19. Robert Liguori says:

    “Wait, a war with Mexico would kill 10,000 soldiers? Hell, just draft another 10,000 and have them fight instead! The oracle didn’t say anything about them dying!”

    The problem with the hypothetical is that either the oracle is using Dramatic Predictions (and they’re going to be satisfied ironically regardless of what you do), or Actual Halting Predictions…but halting predictions don’t encompass themselves, so you need to handle, e.g., Mexico going “OK, new plan. We nuke America without actually declaring war, so the predicted events never take place! Genius!”

    Really, in both the market case and the AI-talk case, I don’t think you can realistically assume that oracle-level certainty exists. If you’re going to be in a military-level conflict with someone, especially someone you can kill outright, they are going to be putting all of their resources into finding ways to trick, fool, and frustrate you; since it’s easier to obfuscate than it is to understand, I’d assume that an AI could show its source code in a form that obfuscates its actual likely reactions, in a way no reasonable challenger could de-obfuscate.

    So, given that we’re playing around with an extremely off-model concept like oracular predictions in the first place, it makes sense that they’d lead to extremely odd-sounding conclusions; they’re like a very specific “1+1=3” axiom leading to a very specific principle of explosion.

    • Jiro says:

      That falls under precommitment again; being willing to fight to the death to inflict losses on the enemy decreases the likelihood of being in a fight-to-the-death scenario but only if you can’t take back the precommitment even when you’re in the wrong branch and it means you really do have to fight to the death.

    • RCF says:

      From a graphic design perspective, I don’t think that the Civil War casualty graph should have been shaded under the curve. A cumulative graph should be a line graph; a rate graph should be shaded. And I don’t see how the rate shot up after “no hope”. It looks to me like the rate is about constant from Shiloh to Chickamauga.

  20. Anonymous says:

    Since a number of people objected to the “inspect source code” part: there is a concept in cryptography called a zero-knowledge proof, which could in principle help here.

  21. Anonymous says:

    A symmetrical notion would be to remove all information, instead of providing perfect answers, i.e., all countries have armies but nobody knows anything about their relative strength, so nobody dares to start wars. That is the premise of Stanislaw Lem’s book “Peace on Earth”.

  22. Shmi Nux says:

    Not sure about FAI or UFAI, but people do not work like that. Even if a reliable near-perfect Oracle were available, the predicted outcome would not seem acceptable until people had actually suffered enough for it. They also put an unreasonable premium on the last little details of the agreement, which are the hardest to predict.

    Consider collective bargaining. Generally 80% of the final agreement can be predicted with 80% certainty before the labor action (e.g. a strike or a lockout) even starts. However, it is nearly impossible to convince either side to accept the predicted deal even if a 100% accurate oracle tells them, let alone if some minor details are in doubt. It takes enough suffering before both sides stop caring about those details and just want to get things done.

    This seems like a combination of sunk cost, warped virtue ethics (an achievement is only worthwhile if paid for dearly), and hyperbolic discounting (of future suffering).

    Extrapolating to FAI vs UFAI, it is possible that they would be unable to agree over the size of the FAI box (each precious cubic mm means a few billion more happy ems) and fight it out until the final agreement is clear, possibly at the cost of a few billion fewer ems and paperclips than without the fight.

  23. Kaminiwa says:

    TL;DR: The entire world gets bought out by the richest country with a decent military, and this can’t take more than a century.

    This seems like a really obviously *undesired* result, so we should all agree not to give a shit about what the Oracle says.

    I kind of feel like this is just asking “how much money do I have to pay you to overlook the fact that I’m going to murder everyone you love”, and expecting a rational answer.

    War is fundamentally about conquering territory, which means displacing everyone who currently lives there, ripping apart thousands of lives. The land clearly has valuable resources or something else going for it, so the losing country is now smaller and more impoverished, with the same number of people. It’s a fundamentally HUGE emotional wreck.

    The fact that a few thousand soldiers died, and a few million dollars of property were destroyed, just goes to show you that the land was *so valuable* that people were willing to pay that price – in fact, so willing to pay that price that they did it non-consensually. The actual consensual “fair asking price” is vastly higher, and it only came to war exactly BECAUSE one or both sides thought that military action would give them a “better price”.

    Furthermore, now the US has a reputation for nobly implementing Oracular suggestions, rather than a reputation as a bloodthirsty menace that must be stopped. This makes their *future* threats more plausible – just like giving in to terrorist demands only encourages future terrorists to make demands.

    Finally, even *if* the Oracle is 100% accurate, it is *amazingly* difficult to prove things to that degree of confidence, and it would be truly miraculous if you could convince the thousands of displaced Mexican citizens to that degree of confidence.

    So you’re selling your country for 25% off the lowest price, which sets a precedent for cheaper acquisitions in the future. If you’re willing to do this, you’re letting your military slide, which lowers the price of future acquisitions. Your military is bored and inexperienced, since they never actually fight, which lowers their strength and morale, which lowers the price of future acquisitions.

    And of course, no one has any incentive to offer you a *higher* price, whereas war is emotionally and morally horrifying and thus people would have paid a much higher price to avoid it in the past.

    And, of course, with the additional resources from the conquered territory, and still having a full-strength military, and having saved 25% on the cost of the attack, your enemy is in a much better position to demand a lower price for the next acquisition.

    So the end result is that the rich grow richer, because now there’s less moral outrage over conquest. Peaceful negotiations collapse entirely into Oracular manipulations until one side has a clear advantage and the other is forced to sell themselves.

    It’s Libertarian “free market” meets Moloch, mixed in with a toxic dose of horribly designed and easily gamed incentives, implemented on a geo-political scale, and ignoring the evidence from every bad merger and buy-out in history. The US can probably conquer the entirety of the American continents, and at a 25% discount and minimal moral outrage, I can’t help but think they would.

    China probably acquires Asia. Europe might hold out for a while. Pretty soon, though, it’s the US and China, and we just have to pray they are evenly enough matched that neither can force the other to accept a buy-out.

    • Leonard says:

      The entire world gets bought out by the richest country with a decent military… This seems like a really obviously *undesired* result

      No. It’s not obvious to me. Is it really that obvious to you? I thought that a world state was fashionable. Don’t progressives still support the U.N.? I mean, what you are talking about here is an end to war. OK, so the price for that is being ruled by unaccountable Chinese communists. But is that really obviously horrible?

  24. Matt says:

    I’m not sure how literally we’re meant to take the war-oracle story, but I think it’s misleading. No country really places a consistent monetary value on human life. (Most likely they don’t have a well-defined utility function at all, but at best their utility function is complicated by the value they place on various procedural norms, to the point that they seem hugely inconsistent from a naively consequentialist perspective.) The political will to sacrifice 75% (or 100%) of the nominal monetary cost is very poor evidence of the will to commit to a real war with real casualties.

  25. Anonymous says:

    The point of fighting a war where your side is overwhelmingly outmatched – such as the Pacific war, seemingly suicidally started by Japan – is that your side, being more desperate, is more willing to suffer losses.
    Then you only need to make the other side suffer enough that the eventual peace agreement they settle for, in order to make it stop, is less favorable to them than it would have been if the war had not been fought.

    In the case of the Pacific war, the Japanese thought that, compared to the Americans, they were more willing to shed blood for the sake of controlling Asia – both because the Americans were not expected to care much about faraway Asia, and because the Americans were deemed morally weaker and less willing to suffer losses than the stoic Japanese.

    Think of every local rebellion against a large empire. The local rebels are always much weaker than the empire, but they are more willing to shed blood for their own land. The empire almost always has the power to crush the rebels if it wants to, but sometimes it isn’t willing to pay the price. Consider the resistance to the Roman Empire in Germany: the Germans fought desperately for their lands, and the Romans just didn’t care enough – if they had cared, they would have rolled over them.

    Life is not a game of Risk.

    Now the interesting thing is that the equivalent of “giant hole agreements” of the kind described by Scott actually existed, in some ways, throughout history. Wars get gradually more and more ritualized in historical eras in which nations harbor warmer feelings towards each other. Rules appear, meant to minimize the shedding of blood. Eventually wars become more and more of a show or a game.

    I could give countless examples: the ritualized show-wars of the Italian Renaissance mercenaries; the gentlemanly warfare of the 18th century, where civilians were never harmed and both sides agreed to employ a number of tactics meant to minimize losses for both themselves and the enemy; the ritualized Greek hoplite warfare in the period before the Peloponnesian wars; the ancient Celtic duels between champions that settled whole wars. I believe some Mesoamerican nations had something similar with their war-deciding ball games.

    All these gentlemanly agreements break down when hatred – and, crucially, differences of ideology between nations – ramps up. If you’re the first to break such an agreement, your side can reap temporary benefits in getting on top. The American revolutionaries broke quite a few rules of gentlemanly warfare; after all, their ideology was radically different from that of their enemies, so it made sense not to play by the rules. The bottom line is that casualty-reducing ritualization of warfare requires trust and similarity of ideologies.

  26. Anonymous says:

    If you’re just looking at the tradeoffs for the event itself, you’re doing the correct reasoning. But that’s totally irrelevant. It’s not about the fight and the results of the fight, it’s about all the other fights you’re not having because you make fighting you a risky proposition! Who cares about one fight, one war, one whatever? It’s totally irrelevant when compared to the whole, and the way to survive the environment where you can be challenged is to respond to challenges in a way that most potential challengers do not like. Otherwise you’ll simply be swamped with challenges and eventually destroyed. Whether in an INDIVIDUAL circumstance your belligerence hurts you does not matter at all.

    Yes, one fight, one war, can kill or destroy you. But if you’re a guaranteed good target for fights and wars, that’s guaranteed to fuck you up in the long run. This doesn’t mean you need to respond to every challenge aggressively – but it does mean that you CANNOT refuse to fight in every circumstance where fighting hurts you.

    Responding to “Hey, I’ll take five utils from you if you don’t fight, and fifty guaranteed if you do fight, so give up OK?” with “FUCK YOU I WILL TEAR YOUR FACE OFF” is actually COMPLETELY RATIONAL, because as a matter of fact EVERYONE has the ability to make such threats AND HAVE THEM BE TRUE! Anyone can, correctly, tell you that the five dollars in your pocket are worth less than not getting pepper-sprayed. In order to avoid being constantly either penniless or pepper-sprayed, you have to respond in a way that looks irrational if you look at the individual case.

  27. I apologize for the derailment but I don’t quite know where else to ask this…
    Where did the idea that an extremely powerful AI will almost certainly have an infinitely large fleet of omnipotent nanobots at its disposal come from? It seems every time AI is brought up on this site, everyone just assumes that nanite-derived Semi-Phenomenal Nearly-Cosmic Power is just part of the package, par for the course.

    I will admit I’m a complete layman when it comes to computer science, let alone futurology… but it always struck me as an oddly specific prediction, and one that everyone was oddly confident in. Once again, I’m a layman… but it seems like once you’re talking about infinite paperclips, you’ve hand-waved away thousands of engineering questions…

    Is there some article on LW somewhere that definitively proves “There Will Be Nanites”?

    • Watercressed says:

      Eric Drexler has done a pretty good job of showing that atomically precise engineering is possible, and an AI will probably have enough optimization power to build such machines.

      • Bugmaster says:

        As far as I know, lots of people criticize Drexler for his optimism regarding nanotech. I don’t think it’s fair to say that non-biological nanotech is a foregone conclusion (of course, biological nanotech is a foregone conclusion since we are made out of it).

        Unfortunately, it looks like an AI will need nanotech in order to become godlike, so I’m not convinced that that is possible, either.

        Most people just handwave such problems away by saying, “oh, well, with enough intelligence all these engineering problems are easy”, but I am not convinced that nanotech is an engineering problem at all.

        By analogy, it looks like traveling faster than light is impossible. In principle. It doesn’t matter how smart you are; in fact, it is very likely that as you get smarter and smarter, you will be able to understand why FTL is impossible better and better.

        In other words, intelligence is not a magic wand.

    • suntzuanime says:

      The assumption is that an extremely powerful AI would be able to solve those thousands of engineering questions. Basically the thinking goes, nanotech seems like something that should be possible if you were smart enough to figure it out, so if you make a really smart thing, it will get nanotech.

      • Matthew says:

        The “superintelligences will have nanotech” supposition doesn’t strike me as that improbable. The “superintelligences will propagate at the speed of light” supposition, OTOH, seems to me to come out of nowhere and defy physics as we understand it. I’d like to hear an explanation for this one, as long as we’re meandering away from the main subject of the post.

        • jaimeastorga2000 says:

          Link? I don’t recall seeing any high-level transhumanists assume that superintelligences can propagate at the speed of light. Maybe they were taking the speed of light as an upper bound on a superintelligence’s expansion rate, or perhaps they were talking about the superintelligence being able to broadcast information, which propagates at the speed of light?

      • Ah, I see. Yes, that makes sense…
        Like the old “infinite monkeys” thought experiment. With near-unlimited time and processing power, the possible becomes the inevitable. Thank you.

  28. Jiro says:

    You can often model “irrationally refuses to do X because of honor” as a form of precommitment. If Mexico precommits to not make the transaction “Mexico sells the land, the US gets the land cheaper than conquest and Mexico loses less than they would in a war”, Mexico will lose if the US decides to go to war, but will gain if the US refuses to go to war (which they can consistently do if they are willing to pay the cheaper price but not the full conquest price).

    However, Mexico can’t make this decision and then change their mind once it becomes obvious that the US is willing to go to war. If they do that, the best thing for the US to do would be to be willing to go to war every time, whereupon Mexico would always change its mind. Being willing to change their mind in the war scenario prevents there from being worlds with the non-war scenario.

    Thus, Mexico “irrationally” resolves to never sell and never change its mind even if changing its mind is beneficial. Being “irrationally” unwilling to do the best thing in situation X could be the result of a precommitment that reduces the chance you get into situation X in the first place.

    (My usual example for this is that people will drive at a cost of more than $X to save $X on an item. This “irrational” behavior is the result of a precommitment; if you’re willing to shop at the store with the lowest prices even if it’s far away, then the nearby stores have to compete with the faraway stores, so the nearby stores are more likely to reduce their prices, and it becomes less likely that the faraway store has the lowest prices in the first place. But you can’t change your mind and shop at the nearby store once you discover that the nearby store didn’t reduce its prices but is cheaper because there’s no driving cost, or the nearby store would anticipate this, never reduce its prices, and you’d lose out.)
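
    For concreteness, here’s a toy expected-value check of the Mexico argument above (all payoffs made up; q is the probability that the US is genuinely willing to pay the full conquest price, while a US that isn’t will still happily make the threat as long as threats always work):

    ```python
    # Mexico's payoffs, in arbitrary units (invented for illustration).
    KEEP_LAND  = 0     # status quo
    SELL_CHEAP = -100  # cede the land at the discounted price
    LOSE_WAR   = -250  # lose the land plus the costs of the war

    def mexico_expected_payoff(precommitted, q):
        if precommitted:
            # Mexico never sells; only a genuinely willing US (probability q) attacks.
            return q * LOSE_WAR + (1 - q) * KEEP_LAND
        # A flexible Mexico caves to every threat, so every US threatens.
        return SELL_CHEAP

    for q in (0.1, 0.3, 0.5):
        print(q, mexico_expected_payoff(True, q), mexico_expected_payoff(False, q))
    # Precommitting wins whenever q * 250 < 100, i.e. q < 0.4: "irrational"
    # stubbornness pays off exactly when it deters enough bluffers.
    ```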

    • Matthew says:

      My usual example for this is that people will drive at a cost of more than $X to save $X on an item.

      In theory, I agree that this is what should happen; in practice, especially with more expensive items, one often seems to see the opposite. If Bob can buy a car next door for $10k, or could buy it a few towns away for $9900, he may not bother to make the drive. This despite the fact that he would obviously do so to save $100 on a toothbrush.

      • Anonymous says:

        Bob’s behavior makes perfect sense if you think of it as consistently following the policy of expending effort to save 1000% of the transaction value, but not to save 1%. And, as a practical matter, it is much easier in real life to follow a simple policy than to reason your way to the optimal decision every time a decision needs to be made.
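
        One way to write that policy down (the 10% threshold is made up):

        ```python
        def worth_the_trip(savings, price):
            # Expend effort only when the saving is a large fraction of the price.
            return savings / price >= 0.10

        print(worth_the_trip(100, 10_000))  # False: $100 off a $10k car
        print(worth_the_trip(100, 200))     # True: $100 off a $200 item
        ```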

  29. pneumatik says:

    I think Thomas Schelling would say that the Oracle or prediction market results are just part of the pre-negotiation interplay. Other posters have brought up issues like future negotiations influencing current ones and the impact on your military of never actually fighting, and I’m sure there are others. Oracles would make things different, but the existence of an infallible Oracle would probably change other aspects of the world much more than pre-hostilities negotiations.

    Also, the decision of whether or not to go to war is never really made by a country so much as by the person in charge of that country. So if that person and their immediate confidants want to go to war, then the country will go to war. This can really insulate people from the impacts of going to war. The President is worried about staying in power. If his advisers tell him that if he doesn’t go to war he’ll lose power, then he’s strongly incentivized to go to war. The people voting for him (or whatever) aren’t voting for war so much as they’re voting for the politician they support. They also support war, but more in the sense of not backing down than in the sense of people actually dying.

  30. DanielLC says:

    It’s a useful strategy to prevent someone from benefiting from hurting you, regardless of how much extra this costs you. That way, they’re not incentivized to hurt you. You’d only start giving them stuff if you can’t just threaten to hurt them to get them to stop. Even then, you shouldn’t give them much more than they’d profit.

  31. Adam Long says:

    Somebody already mentioned Thomas Schelling, but I think his fantastic book “The Strategy of Conflict” has much to say on the issues presented in this post. Schelling reminds us to consider the different dynamics in what game theory calls NON-zero-sum games. To summarize a fascinating and subtle book, the argument is that it can be highly “rational” to convince the other side that you are willing to do the “irrational” thing in a non-zero-sum game. Precommitment is one way to do this. Another is to convince the other party that you really are crazy enough to light a match in a room full of gasoline, so that the other party is willing to give you all his money to prevent you from destroying both of you.

    • Jiro says:

      Isn’t “I’m crazy enough to destroy us both” just another way of phrasing a precommitment to “destroy us both, which is disadvantageous in worlds where the precommitment forces me to do it, but which reduces the measure of such worlds”?

  32. JB says:

    Do prediction markets still work under conditions of information asymmetry? When a government develops nuclear weaponry, that’s not necessarily something that they tell everybody. Then they might think they can outguess the market.

  33. Mike Blume says:

    ETA: addressed upthread

    The main problem I see is that for the market to *be about* something, you still have to occasionally fight a war. Like say you fight one war out of ten, the markets can all be of the form “conditional on this being one of the wars we actually fight…”, but if you never fight a war, there’s no reason to participate in a market, it’s never actually going to pay out.

  34. Lalartu says:

    One more flaw in this reasoning, not mentioned before (for the prediction market case):
    The price of losing a war is often not territory but the nation’s sovereignty (and/or the dictator’s power). This price is perceived as so huge that any chance of victory, no matter how small, is worth fighting for.

  35. TrU says:

    Your proposed mechanism for peaceful settling of such rivalry – with burying money – sounds an awful lot like a Vickrey-Clarke-Groves mechanism: both parties write down what the piece of land is worth to them, the socially optimal outcome according to those reported values is implemented, and every participant X pays their social cost – that is, the difference between (the others’ total valuation of the outcome that would be socially optimal if X’s values were not taken into account) and (the others’ total valuation of the outcome actually implemented). Both negative and positive payments are possible.

    Then, with war, actually waging war would be a third outcome next to (Mexico keeps the land) and (USA takes the land). That outcome, of course, would never be picked by the mechanism, as it’s strictly worse than one of the alternatives.

    I haven’t actually calculated how this would work out in your Mexico scenario, and I assume it doesn’t work because I’ve overlooked something – like needing an infinitely powerful arbiter.
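
    For what it’s worth, here’s a minimal sketch of the Clarke-pivot calculation for the Mexico scenario, with made-up valuations (not the numbers from the post):

    ```python
    # Reported valuations (in $ millions, invented for illustration).
    valuations = {
        "US":     {"mexico_keeps": 0,   "us_takes": 300, "war": 100},   # war = land (300) - war cost (200)
        "Mexico": {"mexico_keeps": 120, "us_takes": 0,   "war": -250},  # war = lost land + war cost
    }
    outcomes = ["mexico_keeps", "us_takes", "war"]

    def social_value(outcome, parties):
        return sum(valuations[p][outcome] for p in parties)

    # 1. Pick the outcome maximizing total reported value.
    everyone = list(valuations)
    chosen = max(outcomes, key=lambda o: social_value(o, everyone))
    print("Chosen outcome:", chosen)  # us_takes -- war is never selected

    # 2. Clarke pivot payments: each party pays the harm its presence imposes
    #    on the others. Payments go to the mechanism, not to the other party --
    #    the formal version of "dig a hole and put the money in it".
    for party in everyone:
        others = [p for p in everyone if p != party]
        best_without = max(social_value(o, others) for o in outcomes)
        print(party, "pays", best_without - social_value(chosen, others))
    ```

    With these made-up numbers the mechanism picks “US takes the land” and the US pays $120M into the hole, which matches the flavor of Scott’s proposal: the payment can’t be handed to Mexico without wrecking the incentive to report honestly.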

  36. Ghatanathoah says:

    >”One factor that prevents wars is countries being unwilling to pay the cost even of wars they know they’ll win. If there were a tradition of countries settling wars by appeal to oracle, “invasions” would become much easier. America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion.”

    Why would the oracle predict the US would win? Wouldn’t it actually say “You will lose before you even begin, because you won’t be able to muster the political and economic will to launch the invasion”?

  37. Deiseach says:

    But why do you say the Mexicans were stupid (or only obsessed with national honour) if they were unwilling to cede the Southwest to the U.S.A.?

    Surely, part of the reason for going to war is that you think you will gain a benefit by doing so. And part of the reason for refusing to hand over Toronto would be that the Canadians think that, in the long run, having Toronto is a benefit to them.

    If the U.S.A. is willing to buy the Southwest or else go to war to get it, then it must be valuable. Why should Mexico hand over a valuable asset for nothing (or at least, for the price the U.S.A. was offering, which I’m sure the Mexican government suspected was below the ultimate value the U.S.A. thought it would get out of having the Southwest)? Maybe in the long run, the value of the Southwest would outweigh the cost of a war.

    Did the value of the Southwest outweigh the cost to the U.S.A. of the war? If someone came along and said “Hey, I’ll take that off your hands for a couple of trillion”, should the present-day U.S.A. sell it?

    It’s one thing if I find an old violin in the attic and put it up for sale on eBay; it’s quite another if, out of the blue, someone comes along and says “Oh, hey, that useless old violin you have? I’ll take it off your hands, I can go as high as fifty quid, you willing to sell?”

    I mean, you’d have to immediately wonder if the violin was worth more than fifty quid and the prospective purchaser knew he could sell it on for a couple of hundred or even thousands.

    • RCF says:

      Part of the premise is that Mexico knows that it will lose.

      • Deiseach says:

        But in most wars, both sides think they have a fair chance of winning. Or if not a fair chance, that’s why we strike first without warning! Or if not a fair chance and they struck first without warning, then we have General Winter to fight for us!

        Or if we don’t have General Winter, then the arts that won a crown must it maintain, and the aggressor is bogged down for a long time holding on to the territory it conquered.

        Which reminds me – how is Iraq doing these days, after “mission accomplished” back in 2003?

        • JME says:

          Are you saying that the whole all-knowing oracle premise is so ridiculous that it isn’t worth a thought-experiment?

        • RCF says:

          Perhaps you should educate yourself on what the word “premise” means, and stop wasting our time with posts that display an utter failure to understand what you’re responding to.

          • Anonymous` says:

            Yay, the rare “True and Necessary, but not Kind”! Although it would’ve been pretty easy to hit all three, like JME did.

  38. Bugmaster says:

    Is there any evidence to suggest that political prediction markets are actually infallible – or, if not infallible, then at least several orders of magnitude more accurate than old-school prediction methods? What is the mechanism that makes the market nearly infallible?

  39. Randall Randall says:

    If an oracle told the US that the Vietnam War would cost 50,000 lives and a few hundred billion dollars, and the communists would conquer Vietnam anyway, the US probably would have said no thank you.

    There’s an argument I’ve seen made (and find plausible) that the primary outcome of the Vietnam war was that the US demonstrated beyond any doubt to the USSR that the USSR would lose any conventional war with the US in which the US was fully invested.

  40. JME says:

    There’s also the relevant SMBC comic on this, although it is slightly confused about Greeks, Lydians, and Medeans I think.

    • Anonymous says:

      Cyrus was (according to the story of that oracular pronouncement) half Medean, not half Greek. That’s a weird change.

      As for Lydians, I think it’s pretty reasonable. I guess you are referring to the Oracle’s implication that Croesus is remembered as an ancient Greek, not as a Lydian. But that’s pretty much true. Lydia is today remembered only through Greek myth. Croesus is remembered for ruling over Anatolian colonies of Greeks and allying with mainland Greeks, not for being an ethnic Lydian or for ruling Lydians.

  41. ZZMike says:

    Another question is, why were there oracles? And why did people continue to consult them? After the first few times, they should have figured out that the oracle is going to give ambiguous replies – ones that could be taken either way (or more than one).

    Was it because the questioners could shirk responsibility for their own actions? “Yeah, but the oracle said…”

    I remember a Star Trek episode in which wars were settled by a vast computer, which decided the outcome based on many of the things you mention. The losing side sacrificed the appropriate number of citizens in the disintegration chambers.

    • Nornagest says:

      When I was reading Herodotus, I remember being struck by how authoritative the oracles seemed to be considered. They’re described not in the terms we’d use for a horoscope or even for a religious prophecy, but more like what we’d use for economic forecasting.

      It’s hard for me to wrap my head around, I’ve got to admit. But I get the sense that misinterpretation of an oracle was in some sense considered the interpreter’s fault — although the whole business of interpretation was clearly an expert domain, and one fraught with difficulties.

      • Anonymous says:

        All I’ve read of Herodotus is the story of Croesus, but he doesn’t trust the Oracles – he tests them. He sends emissaries to several Oracles, to ask what he’s doing on a particular day. Most of them fail, but Delphi correctly identifies his lunch and Amphiaraus is almost as good. Only then does he ask them advice about war and receive cryptic advice.

      • Jaskologist says:

        You should check out Plutarch’s Decline of the Oracles. Ever since I did, the prospect that oracles/magic used to work but don’t any more has fascinated me (and this isn’t the only ancient writing which hints at that).

        There is no need to make any inquiries nor to raise any questions about the state of affairs there, when we see the evanescence of the oracles here, or rather the total disappearance of all but one or two; but we should deliberate the reason why they have become so utterly weak. What need to speak of others, when in Boeotia, which in former times spoke with many tongues because of its oracles, the oracles have now failed completely, even as if they were streams of flowing water, and a great drought in prophecy has overspread the land?

      • Jaskologist says:

        They were probably not any more consistent in reverence for oracles than “we” are for horoscopes. Way, way back in AD 400, St. Augustine performed the first twin study and concluded that astrology was bunk. On the other hand, Kepler in 1600 earned a reputation (and money) as a skilled astrologer, and even tried to rehabilitate the field.

        • Nornagest says:

          I wasn’t trying to point out reverence so much as the way prophecy was treated as a practical authority. The Classical world certainly didn’t always trust the oracles. But they didn’t seem to place them in a separate magisterium the way we might.

          That’s what I was trying to get at with the comparison to economic forecasting. We may or may not trust a think tank when it says interest rates will go down. But we assume it to be based on something concrete, and the Classical world seemed to treat “well, I went and asked Apollo the other day” in much the same way.

          This may have changed by Augustine’s time; I don’t know a lot about early Christianity but I get the sense that its approach to the supernatural was different than the Classical. By Kepler’s, the modern approach had definitely started evolving.

  42. Lambert says:

    Of course, this mechanism of one side giving the other a sum of money is present not in literal battles, but in legal ones. These concepts and debates are, most likely, already present in the art of legal settlements.

  43. Ginkgo says:

    “America might just ask “Hey, oracle, what would happen if we invaded Canada and tried to capture Toronto?” The oracle might answer “Well, after 20,000 deaths on both sides and hundreds of millions of dollars wasted, you would eventually capture Toronto.” Then the Americans could tell Canada, “You heard the oracle! Give us Toronto!” – which would be free and easy – when maybe they would never be able to muster the political and economic will to actually launch the invasion. ”

    In fact this doesn’t take an oracle, just combatants who value their lives more than whatever benefits might accrue from winning – as when those benefits accrue to someone else. For instance, during the Renaissance in Italy, where wars were typically fought with mercenaries, mercenaries on opposing sides would confer (why not? they weren’t actual enemies), decide who was likeliest to win, and then report back to their clients what the likely outcome was and settle the squabble that way. It still cost the clients some money, but the collateral damage was kept to a minimum.

    “What about when the aggressor wins? For example, the Mexican-American War, where the United States won the entire Southwest at a cost of “only” ten thousand American casualties and $100 million (with an additional 20,000 Mexican deaths and $50 million in costs to Mexico)?”

    This war shows all you draw from it, but it shows something deeper even more clearly. It is an example of cultural conventions – in this case the doctrine of national integrity and national boundaries – overruling realistic analysis.

    Realistic analysis would have shown that Mexico had no real control over those areas. The Comanche controlled almost all of Texas, and quite a bit more, and the Hopi and Zuni had crippled Spanish (and thus Mexican) control in their areas in 1680. Nobody but the Navajo controlled their area, and southern Arizona was controlled by the Maricopa-Pima alliance. In all these cases the wars and treaties with Mexico were basically idle, and the Texans and the US took control of these areas either through warfare or by treaty. The Maricopa-Pima alliance specifically had the US out-gunned, so there was no chance of war, and they decided they were tired of the Mexicans so they threw in with the US.

  44. Pingback: Futarchy, superintelligences, and assorted links to related topics | kewl beans

  45. The Citizen says:

    Another great post.

    So basically, Prisoner’s Dilemma. In most real-world scenarios I can think of, it’s possible to some degree for there to be deception involved and for the agreement not to be honoured. Assuming a betrayal can’t be hidden once it starts, you’ve got a situation very similar to the Prisoners’. Assuming they are totally rational parties, you’ve got the issue that a betrayal on the last iteration has no consequence, and therefore the previous iteration will predict that and become equally treacherous, right back all the way to the start.

    Of course, maybe there’s a consequence for countries that betray, but this would of course be severely reduced if the betrayal involved almost completely crushing the other country. I don’t doubt that rationality helps (“irrational” countries would just fight regardless), but I think you would still get a situation similar to the Cold War (two partially rational actors with different goals).

  46. Watchmaker says:

    The conclusion I drew from this is that certain standards of behavior must be enforced before a market is implemented. If futarchy enables extortion, then futarchy causes more extortion. A properly functioning market isn’t just game theory between rational agents. It’s girded by legal and, more importantly, cultural norms.

    If a properly functioning futarchy is possible, it will have to disallow certain types of Coasean bargaining which would incentivize the creation of more Coasean bargaining.

  47. Pingback: In the grim darkness of the far future there is only war continued by other means » Death Is Bad

  48. Unsourced claim that I recall from a documentary: in WWII the Luftwaffe would drop leaflets on some of their targets in France, with essays explaining how Nostradamus had not only predicted the battle in that town but also that the Germans would win – prompting the French target town to surrender.