Ascended Economy?

[Obviously speculative futurism is obviously speculative. Complex futurism may be impossible and I should feel bad for doing it anyway. This is “inspired by” Nick Land – I don’t want to credit him fully since I may be misinterpreting him, and I also don’t want to avoid crediting him at all, so call it “inspired”.]

I.

My review of Age of Em mentioned the idea of an “ascended economy”, one where economic activity drifts further and further from human control until finally it bears no relation to human needs at all. Many people rightly questioned that idea, so let me try to expand on it. What I said there, slightly edited for clarity:

Imagine a company that manufactures batteries for electric cars. The inventor of the batteries might be a scientist who really believes in the power of technology to improve the human race. The workers who help build the batteries might just be trying to earn money to support their families. The CEO might be running the business because he wants to buy a really big yacht. The shareholders might be holding the stock to help save for a comfortable retirement. And the whole thing is there to eventually, somewhere down the line, let a suburban mom buy a car to take her kid to soccer practice. Like most companies the battery-making company is primarily a profit-making operation, but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.

Now imagine the company fires the inventor and replaces him with a genetic algorithm that optimizes battery design. It fires all its employees and replaces them with robots. It fires the CEO and replaces him with a superintelligent business-running algorithm. All of these are good decisions, from a profitability perspective. We can absolutely imagine a profit-driven shareholder-value-maximizing company doing all these things. But it reduces the company’s non-masturbatory participation in an economy that points outside itself, limits it to just a tenuous connection with soccer moms and maybe some shareholders who want yachts of their own.

Now take it further. Imagine that instead of being owned by humans directly, it’s owned by an algorithm-controlled venture capital fund. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of the global economy.

Now take it even further, and imagine this is what’s happened everywhere. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.

This is obviously weird and I probably went too far, but let me try to explain my reasoning.

The part about replacing workers with robots isn’t too weird; lots of industries have already done that. There’s a whole big debate over to what degree that will intensify, and whether unemployed humans will find jobs somewhere else, or whether there will only be jobs for creative people with a certain education level or IQ. This part is well-discussed and I don’t have much to add.

But lately there’s also been discussion of automating corporations themselves. I don’t know much about Ethereum (and I probably shouldn’t guess since I think the inventor reads this blog and could call me on it) but as I understand it they aim to replace corporate governance with algorithms. For example, the DAO is a leaderless investment fund that allocates money according to member votes. Right now this isn’t super interesting; algorithms can’t make too many difficult business decisions so it’s limited to corporations that just do a couple of primitive actions (and why would anyone want a democratic venture fund?). But once we get closer to true AI, they might be able to make the sort of business decisions that a CEO does today. The end goal is intelligent corporations controlled by nobody but themselves.
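To make the mechanism concrete, here is a minimal sketch (in Python rather than an actual smart-contract language) of the sort of logic a DAO-style fund automates. Everything here is invented for illustration – the class, the token-weighted voting rule, the toy proposals – and none of it describes the real DAO’s contract:

```python
# Toy model of a DAO-style fund: members pool money, votes are weighted
# by token holdings, and whichever proposal gets the most weighted votes
# receives the pooled funds. Purely illustrative; a real DAO runs logic
# like this as a smart contract with no off-chain administrator.
from collections import defaultdict

class ToyDAO:
    def __init__(self):
        self.balances = {}       # member -> governance tokens held
        self.treasury = 0.0      # pooled money waiting to be allocated

    def join(self, member, tokens, contribution):
        self.balances[member] = self.balances.get(member, 0) + tokens
        self.treasury += contribution

    def decide(self, votes):
        """votes maps each member to the proposal they support; returns
        the proposal with the largest token-weighted vote total."""
        tally = defaultdict(float)
        for member, proposal in votes.items():
            tally[proposal] += self.balances.get(member, 0)
        return max(tally, key=tally.get)

dao = ToyDAO()
dao.join("alice", tokens=60, contribution=600)
dao.join("bob", tokens=40, contribution=400)
winner = dao.decide({"alice": "battery startup", "bob": "ride-sharing app"})
print(winner, "gets the", dao.treasury, "pooled dollars")
# -> battery startup gets the 1000.0 pooled dollars (alice's 60 tokens outweigh bob's 40)
```

Nothing in `decide` consults a human manager; swap the hard-coded votes for the outputs of a trading algorithm and you get the humans-out-of-the-loop fund described in the next paragraph.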

This very blog has an advertisement for a group trying to make investment decisions based on machine learning. If they succeed, how long is it before some programmer combines a successful machine investor with a DAO-style investment fund, and creates an entity that takes humans out of the loop completely? You send it your money, a couple years later it gives you back hopefully more money, with no humans involved at any point. Such robo-investors might eventually become more efficient than Wall Street – after all, hedge fund managers get super rich by skimming money off the top, and any entity that doesn’t do that would have an advantage above and beyond its investment acumen.

If capital investment gets automated, corporate governance gets automated, and labor gets automated, we might end up with the creepy prospect of ascended corporations – robot companies with robot workers owned by robot capitalists. Humans could become irrelevant to most economic activity. Run such an economy for a few hundred years and what do you get?

II.

But in the end isn’t all this about humans? Humans as the investors giving their money to the robo-venture-capitalists, then reaping the gains of their success? And humans as the end consumers whom everyone is eventually trying to please?

It’s possible to imagine accidentally forming stable economic loops that don’t involve humans. Imagine a mining-robot company that took one input (steel) and produced one output (mining-robots), which it would sell either for money or for steel below a certain price. And imagine a steel-mining company that took one input (mining-robots) and produced one output (steel) which it would sell for either money or for mining-robots below a certain price. The two companies could get into a stable loop and end up tiling the universe with steel and mining-robots without caring whether anybody else wanted either. Obviously the real economy is a zillion times more complex than that, and I’m nowhere near the level of understanding I would need to say if there’s any chance that an entire self-sustaining economy worth of things could produce a loop like that. But I guess you only need one.
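As a toy illustration of how such a loop could sustain itself, here is a deliberately silly simulation of the two companies above. The exchange rates are made up; the only point is that the stockpiles grow geometrically without human demand appearing anywhere:

```python
# A mining-robot maker turns steel into robots; a steel miner turns
# robots into steel; each sells its whole output to the other. Nothing
# in the loop references a human consumer, yet both stockpiles grow.
ROBOTS_PER_TON = 2     # robots built per ton of steel consumed (made up)
TONS_PER_ROBOT = 3     # tons of steel mined per robot consumed (made up)

steel, robots = 10.0, 0.0   # arbitrary starting endowment

for year in range(5):
    robots = steel * ROBOTS_PER_TON    # robot maker buys all the steel
    steel = robots * TONS_PER_ROBOT    # steel miner buys all the robots
    print(f"year {year}: {steel:,.0f} tons of steel, {robots:,.0f} mining robots")
# Output grows by a factor of six per cycle; a factor below one would
# instead be a loop that fizzles out, which is presumably the more
# common accidental case.
```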

I think we can get around this in a causal-historical perspective, where we start with only humans and no corporations. The first corporations that come into existence have to be those that want to sell goods to humans. The next level of corporations can be those that sell goods to corporations that sell to humans. And so on. So unless a stable loop forms by accident, all corporations should exist to serve humans. A sufficiently rich human could finance the creation of a stable loop if they wanted to, but why would they want to? Since corporations exist only to satisfy human demand on some level or another, and there’s no demand for stable loops, corporations wouldn’t finance the development of stable loops, except by accident.

(for an interesting accidental stable loop, check out this article on the time two bidding algorithms accidentally raised the price of a book on fly genetics to more than $20 million)

Likewise, I think humans should always be the stockholders of last resort. Since humans will have to invest in the first corporation, even if that corporation invests in other corporations which invest in other corporations in turn, eventually it all bottoms out in humans (is this right?)

The only way I can see humans being eliminated from the picture is, again, by accident. If there are a hundred layers between some raw material corporation and humans, then if each layer is slightly skew to what the layer below it wants, the hundredth layer could be really really skew. Theoretically all our companies today are grounded in serving the needs of humans, but people are still thinking of spending millions of dollars to build floating platforms exactly halfway between New York and London in order to exploit light-speed delays to arbitrage financial markets better, and I’m not sure which human’s needs that serves exactly. I don’t know if there are bounds to how much of an economy can be that kind of thing.

Finally, humans might deliberately create small nonhuman entities with base level “preferences”. For example, a wealthy philanthropist might create an ascended charitable organization which supports mathematical research. Now 99.9% of base-level preferences guiding the economy would be human preferences, and 0.1% might be a hard-coded preference for mathematics research. But since non-human agents at the base of the economy would only be as powerful as the proportion of the money supply they hold, most of the economy would probably still overwhelmingly be geared towards humans unless something went wrong.

Since the economy could grow much faster than human populations, the economy-to-supposed-consumer ratio might become so high that things start becoming ridiculous. If the economy became a light-speed shockwave of economium (a form of matter that maximizes shareholder return, by analogy to computronium and hedonium) spreading across the galaxy, how does all that productive power end up serving the same few billion humans we have now? It would probably be really wasteful, the cosmic equivalent of those people who specialize in getting water from specific glaciers on demand for the super-rich because the super-rich can’t think of anything better to do with their money. Except now the glaciers are on Pluto.

III.

Glacier water from Pluto sounds pretty good. And we can hope that things will get so post-scarcity that governments and private charities give each citizen a few shares in the Ascended Economy to share the gains with non-investors. This would at least temporarily be a really good outcome.

But in the long term it reduces the political problem of regulating corporations to the scientific problem of Friendly AI, which is really bad.

Even today, a lot of corporations do things that effectively maximize shareholder value but which we consider socially irresponsible. Environmental devastation, slave labor, regulatory capture, funding biased science, lawfare against critics – the list goes on and on. They have a simple goal – make money – whereas what we really want them to do is much more complicated and harder to measure – make money without engaging in unethical behavior or creating externalities. We try to use regulatory injunctions, and it sort of helps, but because those go against a corporation’s natural goals they try their best to find loopholes and usually succeed – or just take over the regulators trying to control them.

This is bad enough with bricks-and-mortar companies run by normal-intelligence humans. But it would probably be much worse with ascended corporations. They would have no ethical qualms we didn’t program into them – and again, programming ethics into them would be the Friendly AI problem, which is really hard. And they would be near-impossible to regulate; most existing frameworks for such companies are built on crypto-currency and exist on the cloud in a way that transcends national borders.

(A quick and very simple example of an un-regulate-able ascended corporation – I don’t think it would be too hard to set up an automated version of Uber. I mean, the core Uber app is already an automated version of Uber, it just has company offices and CEOs and executives and so on doing public relations and marketing and stuff. But if the government ever banned Uber the company, could somebody just code another ride-sharing app that dealt securely in Bitcoins? And then have it skim a little bit off the top, which it offered as a bounty to anybody who gave it the processing power it would need to run? And maybe sent a little profit to the programmer who wrote the thing? Sure, the government could arrest the programmer, but short of arresting every driver and passenger there would be no way to destroy the company itself.)

The more ascended corporations there are trying to maximize shareholder value, the more chance there is some will cause negative externalities. But there’s a limited amount we would be able to do about them. This is true today too, but at least today we maintain the illusion that if we just elected Bernie Sanders we could reverse the ravages of capitalism and get an economy that cares about the environment and the family and the common man. An Ascended Economy would destroy that illusion.

How bad would it get? Once ascended corporations reach human or superhuman level intelligences, we run into the same AI goal-alignment problems as anywhere else. Would an ascended corporation pave over the Amazon to make a buck? Of course it would; even human corporations today do that, and an ascended corporation that didn’t have all human ethics programmed in might not even get that it was wrong. What if we programmed the corporation to follow local regulations, and Brazil banned paving over the Amazon? This is an example of trying to control AIs through goals plus injunctions – a tactic Bostrom finds very dubious. It’s essentially challenging a superintelligence to a battle of wits – “here’s something you want, and here are some rules telling you that you can’t get it, can you find a loophole in the rules?” If the superintelligence is super enough, the answer will always be yes.

From there we go into the really gnarly parts of AI goal alignment theory. Would an ascended corporation destroy South America entirely to make a buck? Depending on how it understood its imperative to maximize shareholder value, it might. Yes, this would probably kill many of its shareholders, but its goal is to “maximize shareholder value”, not to keep its shareholders alive to enjoy that value. It might even be willing to destroy humanity itself if other parts of the Ascended Economy would pick up the slack as investors.

(And then there are the weirder problems, like ascended corporations hacking into the stock market and wireheading themselves. When this happens, I want credit for being the first person to predict it.)

Maybe the most hopeful scenario is that once ascended corporations achieved human-level intelligence they might do something game-theoretic and set up a rule-of-law among themselves in order to protect economic growth. I wouldn’t want to begin to speculate on that, but maybe it would involve not killing all humans? Or maybe it would just involve taking over the stock market, formally setting the share price of every company to infinity, and then never doing anything again? I don’t know, and I expect it would get pretty weird.

IV.

I don’t think the future will be like this. This is nowhere near weird enough to be the real future. I think superintelligence is probably too unstable. It will explode while still in the lab and create some kind of technological singularity before people have a chance to produce an entire economy around it.

But given Robin’s assumptions in Age of Em – hard AI, no near-term intelligence explosion, fast economic growth – but ditching his idea of human-like em minds as important components of the labor force – I think something like this would be where we would end up. It probably wouldn’t be so bad for the first couple of years. But eventually ascended corporations would start reaching the point where we might as well think of them as superintelligent AIs. Maybe this world would be friendlier towards AI goal alignment research than Yudkowsky and Bostrom’s scenarios, since at least here we could see it coming, there would be no instant explosion, and a lot of different entities would approach superintelligence around the same time. But given that the smartest things around would be encrypted, uncontrollable, unregulated entities that don’t have humans’ best interests at heart, I’m not sure we would be in much shape to handle the transition.


395 Responses to Ascended Economy?

  1. Anonymous says:

    >I think superintelligence is probably too unstable. It will explode while still in the lab and create some kind of technological singularity before people have a chance to produce an entire economy around it.

    I often see this argument, but as a person somewhat familiar with machine learning I don’t see much evidence that this is going to happen. Modern ML, especially deep learning, learns through trial and error. It requires a lot of time, data and expert guidance (hyperparameter tuning, etc.) to learn anything. I don’t see how access to its source code (why code? access to model parameters is much more natural) would lead to an intelligence explosion. The most probable outcome seems to be just an error in rewriting its code.

    Also, what I don’t like in this hypothesis is the assumption that an AI (= ML) system can improve itself without training on new data (which goes against the definition of machine learning, by the way), and it is hard to imagine an ML system getting a huge, precise task (rewriting its own code) right on the first attempt, especially without any prior examples to learn from.

    Another argument against an intelligence explosion is the (sad) reality of technological stagnation – Moore’s law is already dead. There is no certainty in computing hardware progress anymore. 10nm looks like the last process node, maybe 7nm, but that’s it. For me the question stands like this: “Is 10nm hardware enough for a subhuman AGI, at least?”

    If you want my take on real-world AGI development scenario then it looks like this:
    DeepMind or OpenAI continue to develop increasingly general deep reinforcement learning agents. Moore’s law has been dead for some time now. Progress in deep reinforcement learning requires more and more training tricks and hardware optimizations (FP16 -> Fixed Point -> BinaryNet), probably to the point of using specialized ASICs for training and inference (= longer development time than software solutions). If we are lucky, with all optimizations and on the latest tech node, DeepMind or OpenAI can demonstrate a subhuman reinforcement learning algorithm. They are going to train it to do science, especially biology, then copy it hundreds of times and run it in the cloud. The AGIs are stupider than human researchers, but due to sheer numbers and fast database/Google access they manage to churn out lots of good results. Then these results get applied and sold by the parent company.
    The alternative outcome is simply “nothing works; not enough hardware; cannot wait for Moore’s law to supply better hardware because there is no Moore anymore”.

    Of course I may be wrong, maybe there is unknown machine learning algorithm capable of “recursive self-improvement”, or maybe deep reinforcement learning algorithms could show this trait. But it hasn’t been demonstrated so far.

    TL;DR: The death of Moore’s law and the characteristics of current ML algorithms don’t look like a foundation for an intelligence explosion. It seems probable that this state of affairs will shield us from the worst outcome.

    • MicaiahC says:

      It’s not that it can’t train on new data, but that one of the following outcomes would happen:

      1) It uses existing data to gain the sorts of insights that allows it to massively improve its current capabilities (e.g. discovers a class of errors in crypto implementations) then uses those capabilities to get more data+processing power.
      2) It’s being fed lots of data anyway because that’s the default expectation, and fed it in a way that allows it to """fake""" ignorance. It ends up being really powerful without appearing so, and when the alpha strike comes, it may have spent a year or two at a formidable degree of intelligence.
      3) New machine learning techniques involve lots of recursive or mutually recursive pieces – say, nested supervisors that tune the hyperparameters and are also allowed to create their own training data. The code isn’t necessarily rewritten directly, but just because the algorithm isn’t writing to a text file and typing in gcc -Oinfinity doesn’t mean that it isn’t self-modifying! Code is data and data is code! (There’s a toy sketch of this right after this list.) In addition, if there were a problem in self-rewrites, this is a problem that humans would want to fix! You’d have to give a reason why humans wouldn’t try to fix a (currently) error-prone process, and why it’s impossible, or at least highly unlikely, that they would ever find any “fast capabilities growth” in the future.
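
      A toy sketch of what I mean by hyperparameters-as-code. The “training” function and the numbers are made up; the point is only that the outer loop mutates the learner’s configuration without ever touching a source file:

      ```python
      # The system never edits its own source, but an outer loop freely
      # mutates the inner learner's hyperparameters, which is still a form
      # of self-modification. The "trained skill" here is a stand-in
      # function, not a real training run.
      import random

      def trained_skill(hp):
          # Pretend the learner performs best at learning_rate=0.1, depth=8.
          return -((hp["learning_rate"] - 0.1) ** 2) - 0.01 * (hp["depth"] - 8) ** 2

      config = {"learning_rate": 1.0, "depth": 2}   # the learner's "code", held as data
      best = trained_skill(config)

      for step in range(200):
          candidate = dict(config)
          candidate["learning_rate"] *= random.uniform(0.5, 1.5)
          candidate["depth"] = max(1, candidate["depth"] + random.choice([-1, 0, 1]))
          skill = trained_skill(candidate)
          if skill > best:                          # keep mutations that help
              config, best = candidate, skill

      print(config)   # drifts toward the high-skill region with no human edits
      ```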

      As a side note, I feel like you’re not trying to understand the opposing point of view. Just because people said “code” instead of “hyperparameters” as the route to self-improvement, instead of complaining that AI risk people are proposing something infeasible, you could have realized that “code” is a stand-in for “any process that an AI could use to increase its capability” and read the argument that way.

      I feel like the stuff about Moore’s law is a distraction. Yes, I suppose it is important, but there are ways to have rapid capability gain other than being stuck on the same computer: being successful enough that Google buys your tiny startup and hooks the AI up to a supercluster, having access to the internet, the AI’s early predictive successes making its owner rich enough to buy more hardware, Moore’s law stopping for general hardware but not for hardware optimized for current or future ML techniques, algorithmic advancements in compiler optimization, etc.

      Even granting all of the above is bad, what does that place your confidence at? Is it 10%? Is it 1%? Is it one in a thousand? I feel like the “end of the human race” requires much more certainty against it than heuristic arguments provide.

      • Madrocketsci says:

        “Even granting all of the above is bad, what does that place your confidence at? Is it 10%? Is it 1%? Is it one in a thousand? I feel like the “end of the human race” requires much more certainty against it than heuristic arguments provide.”

        The confidence you assign to events that you have no data about (no experience, no frequency count for) is arbitrary. It doesn’t contain information about the world. This is the problem with precautionary principle based reasoning about events that have never happened before.

        Number of asteroid strikes per year: We have data about this and can use probabilistic reasoning.

        Number of AI apocalypses: 0 – here the prior probabilities you assign really say nothing about the world and everything about your assumptions. We won’t know how good anyone’s assumptions are until we have AIs to play with.

        You could use the exact same reasoning to argue precautionary principles about placating evil gods. For every nonzero epsilon “prior-confidence” you assign to the existence of an evil god, there exists a delta of divine evil that will make it mistakenly-rational to spend your life trying to placate it.

    • Madrocketsci says:

      This is similar to my views on the whole intelligence explosion/instantly supersuccessful AI ideas.

      I’m more sanguine about computers: there’s nothing that says you can’t take the absurdly fast processors we already have (fast relative to neurons, GHz vs 100Hz) and just stack more of them side by side. You would not be able to solve sequential problems arbitrarily fast that way, but our brains can’t do that anyway. You would be able to solve parallelizable problems arbitrarily fast with arbitrary numbers of computing cores at your disposal. Our brains are an existence proof for intelligence-supporting hardware.
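
      Amdahl’s law is the usual way to put numbers on that caveat about sequential problems; a quick sketch, with an arbitrary 5% serial fraction:

      ```python
      # Amdahl's law: piling on cores only speeds up the parallel fraction
      # of a task, so the serial fraction caps the overall speedup no matter
      # how much hardware you stack side by side. The 95%-parallel figure
      # below is an arbitrary example, not a claim about any real workload.
      def amdahl_speedup(parallel_fraction, n_cores):
          return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

      for n in (1, 10, 100, 10_000, 1_000_000):
          print(f"{n:>9} cores: {amdahl_speedup(0.95, n):6.1f}x speedup")
      # Tops out near 20x; a perfectly parallelizable problem, by contrast,
      # would keep scaling with core count.
      ```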

      But superintelligence: I really have no realistic model for that (and I’m a constructivist at heart). If I did have a realistic model for that, I would be busy setting off the AI revolution that everyone is so afraid of :-P.

      Do the laws of physics forbid some sort of black box that speed-runs the world the moment you hit the power-button? (Assuming (and this is also important) that it is connected to sufficient hardware that it can meaningfully influence the world?) No, but only sort of. There are reasons that this scenario looks very suspicious from a second-law perspective: The box makes very little sense as an *initial condition* for a physical system with no information about the world. That black box, somehow, starts with near perfect information about the world in a highly ordered state.

      Other objects in the same class are fragments of a tea-cup, every atom of which starts in a state where the teacup will leap off the floor and reassemble itself. Physically possible? Sure! The laws of physics are time-symmetric. Realistic as a “typical” initial condition? No.

      You can have an arbitrarily competent learning algorithm, but the fact that it *has to first learn* about the domain in which it is operating ties it to the timescales of that domain. Either that, or the collection of some pre-buffered training set will be tied to the time-scales of the real world. While an AI might become very formidable in time, it’s not going to be *instantaneous*. I suspect it’s going to take every ounce of our high-grade human creativity to ensure that the learning and growth of an AI isn’t as glacially slow as any of the initial-information-free processes that produced humans and their minds in the first place.

      (Edit: “Arbitrarily competent” learning algorithms are also sort of problematic from a second-law perspective. You can “code in” all the information the AI would need to speed-run the world in an “inductive bias” too, but that is the same cheat as starting with a box that contains a long list of “take over the world flawlessly” instructions sans input. There are limits to what it is reasonable to expect a good learning-algorithm/inductive-bias to be able to give you.)

  2. Simon says:

    I think that the concern here that individual efficiency-improving changes to the economy lead to humans becoming worse off is largely based on a fundamental misunderstanding of how a competitive economy works. When someone finds a better/more competitive way to do something they can:
    -match the price of other producers and collect a profit
    -undercut producers, collect less (maybe zero) profit
    but, even if they choose the second option, there’s still a net gain in terms of total human wealth because the consumer is better off for receiving lower prices.

    And what if the product is an intermediate product received by other producers? Then they can collect the profit, or pass the savings on to consumers – still a net gain in total human wealth.

    Of course, it’s entirely possible to end up throwing the gains away and get a circular economy that benefits no one, but you can’t get there in the incremental manner suggested – you can’t get a decline in human wealth out of a sequence of steps each one of which increases human wealth.
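
    A made-up worked example of the step-by-step claim (all numbers arbitrary):

    ```python
    # A gadget used to cost $10 to make and sold for $12; automation cuts
    # the cost to $6. Whether the producer keeps the old price or undercuts
    # rivals, the combined surplus of producer and consumer rises; only the
    # split between them changes. (Numbers are invented for illustration.)
    value_to_consumer = 15.0     # the most the buyer would pay (assumed)
    old_cost, new_cost = 10.0, 6.0

    for label, price in [("match the old price", 12.0), ("undercut rivals", 8.0)]:
        producer_surplus = price - new_cost
        consumer_surplus = value_to_consumer - price
        print(f"{label}: producer +{producer_surplus:.0f}, "
              f"consumer +{consumer_surplus:.0f}, "
              f"total +{producer_surplus + consumer_surplus:.0f}")
    # Before automation the total surplus was (12-10) + (15-12) = 5;
    # afterwards it is 9 under either pricing strategy.
    ```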

    For a much better explanation see chapters 15 and 16 of David Friedman’s Price Theory textbook:
    http://www.daviddfriedman.com/Academic/Price_Theory/PThy_ToC.html

    For the same reasons “capitalism” doesn’t belong in the examples in Meditations on Moloch (“race to the bottom” also doesn’t belong, for simpler reasons – the people moving to the new country benefit).

    There are some other concerns that might be entirely valid, but I think should be logically separated from that concern:

    An automated corporation could end up serving its programmed goals to the detriment of humans. This is essentially the unfriendly AI concern just with the AI running a corporation.

    Difficulty of regulating automated companies.

    The fact that humans become unnecessary with sufficient automation. I pointed at this one earlier (in such a scenario human wealth becomes zero-sum from the point of view of human actions, with potential negative implications for politics and morality).

  3. Art says:

    Scott,

    I am not following your argument. In your example, the battery company switches from supplying its product to the soccer mom to supplying it to another robotic firm, but I am still assuming that the other firm is in the business of supplying stuff to humans. At least you are not giving me a reason to assume otherwise.
    Currently, demand by human consumers triggers human labor. I think you are showing how this demand can be satisfied without any human labor. But I don’t think you are showing how that will lead to the elimination of human consumers as the ultimate reason why anything is produced.
    What am I missing?

  4. Protest Manager says:

    Run such an economy for a few hundred years and what do you get?

    Read “There Will Be Dragons” by John Ringo for an answer to your question, since that’s the economy they have, at least at the beginning of the book. Heck, you can even read it for free, legally.

  5. Zach says:

    I’m surprised no one has mentioned this, but what you’re describing is basically the background for the original Dune. Forget the prequels’ treatment of this event (which is crap); in the original books we have some interesting hints at what took place (with even more detail coming from the technically-not-canon-but-better-than-the-new-canon Dune Encyclopedia).

    Basically, AI of some form existed, and it began to lead to some kind of human-less economy. The book itself says little about what precipitated the change, or how it manifested. There’s a line from one of the characters saying that

    Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

    There are vague references to massive societal upheaval (and the creation of new guilds and training to basically allow humans to do the things that computers used to do).

    The Encyclopedia takes this further, and says that what set things off was a hospital AI’s decision to terminate someone’s pregnancy. This someone just happened to be an important political figure (or at least married to one), and it basically resulted in a borderline-theocratic rebellion that led to a civil war. You ended up with an anti-AI crowd versus a pro-technology crowd. But what I found particularly interesting about the former was that they weren’t necessarily anti-technology, they were just pro-human. This led to an idea that humans should be improved to the point that they could replace that technology (which is what happens), and their society grows from there. The resulting society (a few thousand years later) is highly stratified and feudal, and there’s a certain irony to the same philosophy (about trusting people) resulting in a civilization that disdains democracy and the plebs more generally.

    • Kevin C. says:

      Also, from God Emperor of Dune:

      “The target of the Jihad was a machine-attitude as much as the machines,” Leto said. “Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments. Naturally, the machines were destroyed.”

      And, of course, Herbert almost certainly chose the name “Butlerian Jihad” in reference to Samuel Butler’s ideas in works like “Darwin among the Machines”:

      Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life. The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

  6. Yechao says:

    You mentioned it was a lot more difficult than the book contends, but it could be that we are still thousands of years away from simulating anything close to the human brain. Maybe we have been looking at the brain through the wrong framework for the past few decades:
    https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

  7. Terrible Jokester says:

    Q: What do you call a Disneyland without children?

    A: A Nickland.

  8. Anon says:

    But in the long term it reduces the political problem of regulating corporations to the scientific problem of Friendly AI, which is really bad.

    Wait, Scott, are you telling me that corporations are superintelligent?

    • Murphy says:

      I think it’s more that they could be in the future, if certain events which seem likely to happen actually happen.

  9. Bugmaster says:

    There’s no need to invoke Ascended Economies and hyper-intelligent AIs; all of the potential problems that Scott describes are already here.

    As Scott says, regular companies that exist today would already gleefully pave over the Amazon if they could; in fact, they can, and they are actively doing it. Local regulation can’t stop them; it only slows them down a little. Humans are already decoupled from most economic entities. When I’m buying that new smartphone, I have no way of knowing anything about any of the people who built it; nor do I care. Maybe I wouldn’t buy the phone if I knew that its makers were paving the Amazon; but in this case, it’s the Amazon I care about, not the entities involved.

    There are already tons of positive feedback loops in the economy, e.g. the housing bubble. Usually, they come to a halt when natural resources are exhausted, or when the markets become super-saturated.

    We are already dealing with these problems, in a limited fashion; but positive feedback loops and negative externalities are a feature of the physical world we live in. Thus, until we change the laws of physics, the best we can do is mitigate the problems a little.

    Saying, “yes but a hyper-intelligent AI will make things hyper-worse” is just hand-waving. How do you know anything at all about what such an AI might do ? It’s easy to say, “of course the AI would be able to make the market super-efficient”, but, without some proposed mechanism for doing so, this is no different from saying, “of course, God will punish the sinners in the afterlife”.

  10. Russell says:

    Even today, a lot of corporations do things that effectively maximize shareholder value but which we consider socially irresponsible. Environmental devastation, slave labor, regulatory capture, funding biased science, lawfare against critics – the list goes on and on. They have a simple goal – make money – whereas what we really want them to do is much more complicated and harder to measure – make money without engaging in unethical behavior or creating externalities. We try to use regulatory injunctions, and it sort of helps, but because those go against a corporation’s natural goals they try their best to find loopholes and usually succeed – or just take over the regulators trying to control them.

    I just don’t recognise this world at all. I certainly don’t recognise the companies I have worked for. I recognise some companies sometimes do some bad things but you are in danger of making corporations seem like a less ethical version of the Third Reich.

    I do get of course that companies are in some ways definitionally amoral but their customers and (while they still have them) employees are not. I think companies are much more embedded in our moral universe than you give credit for. To take your Bitcoin Uber – that’s a taxi service I don’t plan to use because unlike the algorithm I do have some moral values.

    Remember the only purpose of investment is consumption. Or at least someone once said that and it seems manifestly true to me. So the purpose of a company is to allow someone to consume. And only people can consume so the idea it will all just get detached in a sort of robot round robin seems illogical to me.

    I sort of suspect I am missing the point of the article because I don’t really understand this whole em thing. (It wouldn’t hurt me to read the book I guess). And part of the reason I don’t get the em thing is I don’t believe either in the idea of uploading your brain or even the idea of strong AI. Robert Epstein had a good article on why the computer is not a brain nor a good analogy for one. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer It affirmed my existing beliefs so I found it very persuasive.

    I often think I’d like to play a computer at a game, but make it an easy game and not a hard one like chess. I am hopeless at chess so it would beat me easily. But let’s play ‘Is that joke funny or is it not a joke at all?’ and I’d back myself against Deep Blue and its descendants quite happily. I’d even back my nephew, and he is even dimmer than I am. And if computers can’t beat my nephew at a nice easy game like that, why do we think they can ever evolve into em-bearing devices?

    • Anon says:

      Edit note: I apologize if any of this seems overly snarky or if you take offense. That article you linked REALLY gets on my nerves.

      I think companies are much more embedded in our moral universe than you give credit for.

      You seem to think there is only one moral universe. That is wrong. Many people have self-consistent (and even inconsistent) moral frameworks that conflict with each other. Why do you think there’s been so much bickering over identity politics these days?

      Consider the Hobby Lobby case, where the Supreme Court ruled that Hobby Lobby is not required to cover certain contraceptives in its employees’ health plans. Whether you agree with it or not, no matter how the case was resolved, someone’s perspective will inevitably be “a moral violation occurred”.

      Currently Hobby Lobby is violating the morals of a bunch of feminists, Tyson is violating the morals of vegans, and capitalism itself violates the morals of communists. Considering that some companies already violate some people’s moral universes, how can you say that the “moral universe” a company inhabits won’t eventually end up outside the realm of humanity entirely? Especially considering that most people are against animal cruelty in factory farming, but continue to eat Tyson meat anyway.

      Consider the ems again, except making an em requires killing the human, and the business making the ems is on a B2B model. All the big businesses will buy the ems for cheap slave labor, and everyone except humanity makes a killing. Even if you eventually ban the em company and ems in general, the big businesses can offshore the ems in Croatian server farms and continue to make a killing while throwing humanity under the bus. And it’s not like the humans buying the stuff will care.

      To take your Bitcoin Uber – that’s a taxi service I don’t plan to use because unlike the algorithm I do have some moral values.

      I think you’re missing the point of the example. Most people would use the service, especially if it’s priced below most ridesharing services, because they don’t give two shits, and thus it can remain profitable and therefore extant. The only real barrier to entry would be Bitcoin being generally terrible from a UX standpoint, but that can be fixed with some Bitpay integration or such.

      And only people can consume

      This is the weak point of your argument. A lot of businesses run on the B2B (business-to-business) model where they sell their product to other businesses rather than people (Trello, Slack, Zendesk). Businesses aka corporations aka NGOs also can consume, and if they’re all running on algorithms no humans are really consuming anything in the process.

      Robert Epstein had a good article on why the computer is not a brain nor a good analogy for one. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer

      I saw this article appear in the SSC comments earlier, and it still grinds my gears. From the article [snark in brackets is directed at Epstein, not the reader]:

      Senses [aka inputs], reflexes [aka outputs] and learning mechanisms [aka methods by which the outputs related to certain inputs can be modified] – this is what we start with, and it is quite a lot, when you think about it. [One might even think it’s quite enough to be Turing-complete.] If we lacked any of these capabilities at birth, we would probably have trouble surviving. [Almost like how a computer might have trouble processing data if it lacked an input method for that data. Hmm.]

      But here is what we are not born with: [This is the part where Epstein plays a game of semantic illusions and spits out a bunch of words he’s seen associated with computers with absolutely no rhyme or reason, in hopes that seeing one set of words in the “brain” box and another set of words in the “computer” box convinces you the boxes have different contents, even though the words mean the same thing]
      information [just neurons in configurations that correspond to how our senses create reflexes],
      data [synonym of information],
      rules [just consistent patterns by which the senses create reflexes],
      software [a computer doesn’t necessarily have or require software, but let’s throw this here in hopes of inducing semantic confusion in the reader],
      knowledge [just the ability to recognize sensory patterns that correspond to faces, and reflexes that tell us not to breathe in water and to be very afraid of snakes and spiders],
      lexicons [see: software],
      representations [just a nigh-hardwired narrative creation and interpretation ability, to the point where it’s a cornerstone of evolutionary psychology and present throughout cultures and mythologies],
      algorithms [just methods by which some senses trigger reflexes],
      programs [see: lexicons],
      models [just object permanence and the ability to generalize from past experience] [also synonym of representations],
      memories [whoa whoa whoa, where the FUCK do you think the word comes from you retarded git?? do you think the concept of memory wasn’t invented until the 1950’s when some programmers were working with RAM cards and started thinking “hey, these memory things do the same thing humans do when asked to recall data!” and then called human recall “memory”?? the concept of memory dates back to ancient greece and probably earlier, but you have the fucking audacity to say that humans don’t have memory? i could write a fucking screed on all the things that could only have happened due to human memory, like agriculture or exploration or language, but i need to move on before i give myself a fucking aneurysm],
      images [just the sensory patterns transmitted to the brain through the eye that can be retained and remembered at a future date via MEMORY YOU FUCKING TURDMUNCH],
      processors [just many neuron clusters that do specific jobs when processing sensory data and send signals to breathe and move and cycle blood],
      subroutines [see: programs],
      encoders, decoders, [these are both programs]
      symbols, [of course a computer doesn’t actually contain symbols at any point, but that doesn’t matter because i’m trying to make an argument]
      or buffers [see: subroutines] – design elements that allow digital computers to behave somewhat intelligently. [Of course, most of these design elements are imposed by humans so that the computer can do useful human work, but that doesn’t matter because I’m making an argument.] Not only are we not born with such things, we also don’t develop them – ever.
      [Except for when humans collect information and data about their environment, create and interpret rules of law and society, acquire knowledge of the environment and culture, create representations of how society and physical reality work, learn new algorithms and processes by which they can get food or find a mate, generalize physical and social movement through models, remember things better and with more clarity, see things they recognize every day, create new neuron processors through mitosis, and learn to encode and decode a language based on its sensory symbols. Then they develop those things.]
      [Of course that list doesn’t include a bunch of the superfluous computery stuff, e.g. subroutines and lexicons, but if they learn to code assembly or decide to construct a language, they’d probably develop subroutines and lexicons.]
      [Of course, some things will never be developed in a human e.g. buffers, but we have different mechanisms to conduct the same functionality e.g. short-term memory that is reinforced by repeated exposure to the same data, but can be forgotten if necessary.]

      I refuse to go deeper because I might actually punch a wall and people are sleeping in my house.

      It affirmed my existing beliefs so I found it very persuasive.

      lol confirmation bias much?

      I often think I’d like to play a computer at a game, but make it an easy game and not a hard one like chess. I am hopeless at chess so it would beat me easily. But let’s play ‘Is that joke funny or is it not a joke at all?’ and I’d back myself against Deep Blue and its descendants quite happily. I’d even back my nephew, and he is even dimmer than I am. And if computers can’t beat my nephew at a nice easy game like that, why do we think they can ever evolve into em-bearing devices?

      Deep Blue was designed to play chess. It was designed to do literally nothing else. It would be shitty at your joke game. It would be shitty at checkers. It would be shitty at Go. It would be shitty at literally anything that wasn’t chess. Of course you would back yourself against Deep Blue. I would back you against Deep Blue too.

      What about Watson, though? Remember him? Encyclopedic knowledge of so many things that he curb-stomped Jeopardy! for an entire season? I hazard he might actually be a challenge to you in your joke game, if it’s based solely on aggregate accuracy of the answers and not reaction time.

      If a programmer actually gave a shit about your challenge, I bet someone could write up a machine-learning algorithm, give it a base of 10,000 jokes and 10,000 things-that-aren’t-jokes, and do just as well as you in joke recognition. Considering you had over a decade to hone your joke-recognition techniques, and the computer had maybe a month, that’s pretty impressive. Furthermore, if you gave the programmer a decade, the program could probably recognize jokes in so many more languages than you can that any multilingual challenge would be a blowout.
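
      For what it’s worth, the plumbing for that bet is short. This sketch assumes scikit-learn and uses a few stand-in strings where the 10,000-example corpus would go; whether a model like this captures humour or just surface cues is a fair question:

      ```python
      # Bag-of-words features plus logistic regression: about the simplest
      # joke/non-joke classifier someone could throw together. The handful
      # of strings below are placeholders for the imagined 10,000-item corpus.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      jokes = [
          "Why did the chicken cross the road? To get to the other side.",
          "I used to be a banker, but I lost interest.",
      ]
      non_jokes = [
          "The meeting has been moved to three o'clock on Thursday.",
          "Please remember to submit your expense report by Friday.",
      ]
      texts = jokes + non_jokes
      labels = [1] * len(jokes) + [0] * len(non_jokes)

      model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(texts, labels)
      print(model.predict(["Why did the banker cross the road?"]))
      ```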

      The contest you suggest is obviously designed to be stacked against computers, because they don’t have a knowledge of culture or jokes and thus wouldn’t be able to do it. Even so, I’ve noted a few ways that computers could STILL kick your ass if any programmer even gave a shit. As time goes on and processing power, storage, and transistors get cheaper, smaller, and faster, your ass seems less and less safe from the computers.

      By the way, saying that computers won’t “evolve into em-bearing devices” is like saying trains won’t “evolve into coal-bearing devices”. Evolution is irrelevant. The point is that someone will make an em, put it on a computer, and use the computer to simulate the em, the same way someone mines coal, puts it on a train, and uses the train to move the coal.

      • Russell says:

        Thanks for the long and thoughtful reply. I am a bit overwhelmed so I’ll just concentrate on the joke recognition game bit. The game I have in mind is that human comedians think up 50 brand new jokes and also come up with 50 things that look a bit like jokes but aren’t actually funny. Of course the jokes have to be new – no searching databases please! Me and my nephew would trash any machine (I believe) at the game of recognising which ones the comedians intended to be funny. The reference to Deep Blue was just a jokey reference to a machine that plays humans at whatever the game happens to be. The idea that I don’t know it was designed to play chess is, well, a bit snarky indeed. My point is that if we play computers at the kind of games they are good at, they will beat us – anyone for ‘speed working out square roots’? But everyone is so impressed by the cleverness of designing a Deep Blue that they forget to be deeply unimpressed by all the games no computer could ever play. I just feel AI is a more plausible form of Harry Potter magic, which is fun for thinking up philosophical hypotheticals, but I don’t myself buy the premise.

        And for clarity, the confirmation bias thing was a joke! 😉 I was just trying to acknowledge that it is easy for a non-expert (me) to find experts who agree with him, but without the ability to critically assess them that is not hugely helpful in trying to find the truth. So I was interested to see what other people here made of it.

        • Murphy says:

          Sure, recognising what’s humorous to a human is hard.

          It’s also pretty hard for humans to figure out why a pack of chimps have just started cackling; another chimp would probably do far better at the task than you or your nephew.

          It’s not a problem of intelligence so much as cultural and language context.

          Currently machines aren’t very good at the game 7 minutes in heaven but that’s not so much an intellectual problem on the part of AI as a practical problem of robotics and material science.

          (relevant xkcd https://imgs.xkcd.com/comics/game_ais.png )

          • Russell Hogg says:

            Yes, that’s right – and a cheetah would beat me at the 100 metres (assuming it didn’t stop to eat my nephew!). But I don’t hear anything to persuade me that an em is actually possible. In these debates that is just the premise of the argument, and of course you need to accept it to have the really interesting discussion, but I do think it’s bogus at its heart because the premise is false. Magic is impossible and so are ems (though I can’t prove either statement).

      • LHN says:

        What about Watson, though? Remember him? Encyclopedic knowledge of so many things that he curb-stomped Jeopardy! for an entire season?

        Watson won two televised games (admittedly against returning champions), and about two thirds of its test matches.

        • Matt M says:

          And it might be worth noting that Watson’s televised wins seemed almost entirely due to a speed advantage, not a knowledge advantage. The humans he faced seemed as if they knew all the answers as well, but Watson regularly beat them to the buzzer. Midway through the game, you could tell that the human players were fanatically trying to buzz-in first for every question, whether they already had the answer in their heads or not.

          Also interesting that Watson once faltered in Final Jeopardy with an obviously wrong answer (the clue asked for a US city, described by its airports, and he guessed Toronto), even though there was no relevant time constraint.

      • Nornagest says:

        If a programmer actually gave a shit about your challenge, I bet someone could write up a machine-learning algorithm, give it a base of 10,000 jokes and 10,000 things-that-aren’t-jokes, and do just as well as you in joke recognition.

        As a programmer that’s occasionally worked in machine learning, this is harder than it sounds. It’d be easy to pick up knock-knock jokes and that sort of thing, probably a lot of other jokes in question-and-answer format, but I’d be willing to bet that reliably recognizing jokes that don’t include formatting cues would take some very deep natural-language magic, or a much larger training set, or both.

        That’s not to say that I agree with the article; I don’t.

        • Peter says:

          Yeah, I’ve done lots of work on computational linguistics projects, that sounds about right; my vote is for “beyond both”, I think “AI complete” applies here. Getting above-random performance is easy (especially if you can use cheap cues such as the formatting of knock-knock jokes, the presence of “Soviet Russia”, etc. – on the other hand if your things-that-aren’t-jokes are unfunny non-jokes eg “Knock knock, who’s there, Bob, Bob who, Bob the plumber.” then such cheap tricks won’t get you very far), getting human level results… well, with the current state of the art that’s a good joke in and of itself.

    • Peter says:

      The link: it could be called “why most of what you think your average desktop PC does isn’t like what a brain really does” – that covers everything discussed in the article and still leaves room for AI. It’s not a bad essay for psychology but doesn’t say anything definitive about the prospects for AI, beyond “if you simply extrapolate conventional non-AI computing you won’t get something like human intelligence”. There are multiple issues here.

      First, computers tend to be built to be useful – for things like breaking codes or playing music. The standard analogy is that we like to build fixed-wing aircraft rather than ornithopters because we’re usually more interested in effective and efficient flight than we are in precisely mimicking birds. Nevertheless, some people like to build ornithopters – apparently there are even hobbyists who are into it.

      Second: the picture we get of our own minds by introspection looks like it may be a severe distortion of what’s really going on; our model of our own thought processes necessarily has to be a gross oversimplification, otherwise, it would be too big to fit in our heads. Apparently if you dig out some old papers from the Turing era, a lot of the groundwork for how computers work was based on the pioneers introspecting about how they’d solve maths problems. It turns out that that groundwork led to a lot of useful technology.

      Third: AI tends to mean “things computers can’t do yet”. The history of AI seems to be a series of waves of enthusiasm followed by “AI winters” where people seem to be stuck, then new ideas pushing things forward. The first main wave of AI was “symbolic AI”, sometimes known as Good Old-Fashioned AI (GOFAI), which followed the sorts of ideas being disparaged in the article, and lo and behold, it failed to live up to the promise. A lot of people searched around for different ideas about how to do AI – or AI-like things that we’re not allowed to call AI for some reason, like machine learning or computational linguistics – and I can tell you from personal experience that those things are quite different to program, test, debug, explain to customers etc. than most other computer things.

      Fourth: Imitating the biological hasn’t gone entirely out of the window – it’s like we’ve found some limits of fixed-wing aircraft and think we might have to learn more about how to flap wings. Artificial neural networks (ANNs) are a good example. The AI community has given up on ANNs twice and each time there’s been a resurgence when people find ways to work around what were thought to be crippling fundamental limitations. They played a big role in the AlphaGo win. You know those learning mechanisms the article talks about – they’re one of AlphaGo’s big secrets. Sometimes people even accidentally imitate the biological. There’s a technique called “TD learning” which was invented as a way to help programs learn by doing – it’s received a lot of interest from neuroscientists, and they even think they know what sorts of (natural) neurons are involved.
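
      For the curious, the core of TD learning fits in a few lines. This is a toy TD(0) value update on a made-up random-walk environment, not anything from the AlphaGo or neuroscience work mentioned above:

      ```python
      # TD(0): after each step, nudge the value estimate of the previous
      # state toward (reward + discounted value of the next state).
      # "Learning by doing" in its barest form; the 5-state chain is an
      # arbitrary toy.
      import random

      N_STATES = 5            # states 0..4; reaching state 4 ends the episode with reward 1
      V = [0.0] * N_STATES    # value estimates, learned from experience
      alpha, gamma = 0.1, 0.9

      for episode in range(2000):
          s = 0
          while s != N_STATES - 1:
              s_next = min(s + 1, N_STATES - 1) if random.random() < 0.5 else max(s - 1, 0)
              reward = 1.0 if s_next == N_STATES - 1 else 0.0
              V[s] += alpha * (reward + gamma * V[s_next] - V[s])   # the TD(0) update
              s = s_next

      print([round(v, 2) for v in V])   # estimates rise as states get closer to the goal
      ```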

      Fifth: The “em” approach is analogous to saying, “waaah, flight is hard, we’ll never understand it, but if we copy a bird in microscopic detail we’ll have built something that flies.” This doesn’t appeal to me. Yes, if you make a detailed enough copy of something, then the copy should work as well as the original; I’m not a dualist. If you make a detailed enough simulation then the simulation should exhibit the same behaviour. I’m a chemist by training, and oh gods, simulating matter down at that level of detail is hard (especially if there’s water involved, like, say, in the brain. Water is freaky). And I think we’re likely to have to get it right. There’s some interesting work on evolutionary design of circuits, where circuit designs are tested on real hardware. The designs that come out of these processes… don’t look like something an electrical engineer would design; they wouldn’t work if you replaced all of the components with “ideal components”; they rely on the fine details of how the components work that engineers tend to think of as “things which will mess up your circuit if you don’t take steps to prevent it”. Which makes me think that the products of natural evolution and natural learning will have the same dependence on the fine details of the hardware. So our simulations will have to be really big, and between the breakdown of Moore’s Law and the fact that I actually see AI research as going somewhere, I don’t really see the em approach as being practical – it feels to me that making actual proper AI will be easier, and maybe the “technical difficulties” of getting accurate enough scans and enough computing power together will render the em project forever impractical.

      Sixth: a side point about representations. I said earlier that computers are good for playing music. How? Sometimes they store a digital representation of the waveform, where the position of the speaker at each moment in time is stored as a number. However, that’s not how my music collection is encoded – that would be too big. Instead, the encoding breaks the waveform into little slices, does a Fourier transform on each slice, and stores those parts of the transform that correspond to audible frequencies; the original waveform is lost, and the waveform that ends up getting reconstructed isn’t an exact duplicate of the original. So each speaker position isn’t stored in a single byte (not even a set of 4 bytes) but is distributed over a large number of bytes, which are also simultaneously storing lots of other speaker positions. It sounds a bit like what the article is saying about how the brain “stores” stuff, except the brain’s doing something on a much grander scale. Philosophical question: is my computer storing representations of the songs that I like to listen to? If so, why isn’t my brain?
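
      A rough sketch of that slice-and-transform idea, assuming numpy and made-up parameters (slice length, audible cutoff); real codecs are much more sophisticated, but it shows why no single stored number corresponds to one speaker position:

```python
# A toy "slice, transform, keep audible frequencies" encoder/decoder.
# Not a real codec; just illustrates storing spectra instead of samples.
import numpy as np

SAMPLE_RATE = 44_100      # samples per second
SLICE_LEN = 1024          # samples per slice
AUDIBLE_MAX_HZ = 20_000   # keep only roughly audible frequencies

def encode(waveform):
    """Return one truncated spectrum per slice of the waveform."""
    spectra = []
    freqs = np.fft.rfftfreq(SLICE_LEN, d=1 / SAMPLE_RATE)
    for start in range(0, len(waveform) - SLICE_LEN + 1, SLICE_LEN):
        spectrum = np.fft.rfft(waveform[start:start + SLICE_LEN])
        spectra.append(spectrum[freqs <= AUDIBLE_MAX_HZ])  # drop inaudible bins
    return spectra

def decode(spectra):
    """Rebuild an approximation; the original waveform is not recovered exactly."""
    slices = []
    for spectrum in spectra:
        padded = np.zeros(SLICE_LEN // 2 + 1, dtype=complex)
        padded[:len(spectrum)] = spectrum
        slices.append(np.fft.irfft(padded, n=SLICE_LEN))
    return np.concatenate(slices)

# One second of A440: each reconstructed sample depends on a whole slice's worth
# of stored coefficients, so no single stored number is "the speaker position".
tone = np.sin(2 * np.pi * 440 * np.arange(SAMPLE_RATE) / SAMPLE_RATE)
approx = decode(encode(tone))
```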

  11. SUT says:

    I maintain Paul Ryan [fiscal hawkism] is our best hope for the economy to stay grounded.

    It’s clear that the social democracies of the west have an anti-preference for the hyper-efficient, volatile and competitive values of the AI-driven economy. Basically what I mean is France indicting Uber execs.

    But there’s one way to override the democratic impulse to dig its canals with spoons, and that’s fiscal meltdown to the point of a Greece / Venezuela. If this were to happen to the US Dollar, the world would need to settle on a new standard currency and I think crypto-currency looks very attractive as world powers like China and Russia are wont to confiscate the wealth and freedom of “enemies of the state”.

    What this does is ultimately cut democratic (“human”) institutions off from exerting their monopoly of force on economic issues*. And anyone who says Yay! to this is a short-sighted fool that doesn’t know what’s coming to him in the new Hobbesian reality.

    All this to say: let’s be really really careful to keep a good credit rating for the dollar, even above other more humanist long-term wins (e.g. universal pre-K => smarter workforce!). While the value of participating in the officially sanctioned US economy exceeds the cost of tolerating the taxes and regulations, the dollar gives us the chance to shape our future into one built (if not *by*) then *for* humans.

    *: there will still exist $State_Bucks, but these will be more like medicaid insurance, only good for crappy services, honored sporadically, with a poor exchange rate to the cryptocurrency that can get you premium services and move across regulatory regimes.

    • Aegeus says:

      If this were to happen to the US Dollar, the world would need to settle on a new standard currency and I think crypto-currency looks very attractive as world powers like China and Russia are wont to confiscate the wealth and freedom of “enemies of the state”.

      I think that a ‘real’ currency still sounds more promising. Leaving aside the fact that almost nobody is set up to do business completely in Bitcoins, or the fact that Bitcoin doesn’t have the transaction rate necessary to support the world economy and the technologies which can do so are still under development, there are about three other reserve currencies you could try before turning to Russia and China. The Euro, the Pound, and the Yen are all more popular reserve currencies than the Yuan or Ruble, and none of them have the problems that Russia and China do.

      Besides, I don’t get what threat you’re seeing from Russia and China. The mere fact that people are trading in yuan doesn’t make it possible for China to start confiscating their assets. Even if Microsoft starts doing their business entirely in yuan and only uses USD when they have to, they’re still based in the US. China can’t touch assets that aren’t in China.

      Also, this is all a moot point, because any financial meltdown that renders the dollar worthless will probably crash most of the world’s economy as collateral damage. The US’s economy is huge.

      • SUT says:

        The point’s not a critique of any particular currency. Holding money and debt of any sovereign faces the risk of fiscal insolvency in the west and confiscation in the east. Either way, the root malignancy is lack of trust in future actions, which makes it undesirable as a means of trade.

        If the question we’re talking about is how the economy comes to be directed by a heartless digital mechanism, it’s hard to beat the significance of international trade becoming primarily done in a crypto-currency. What cryptology does is reduce trust to math, and that makes it appealing for private trade. But this is also appealing to the AI: in a digital currency, a country can’t use its physical presence or even its full might to default without consequences. Even to default against virtual agents; GE pensioners can lose their life savings to the DAO, and at that point there’s nothing we can do democratically about it.

        If that’s a plausible setup for unfriendly scenarios, it makes sense to not let that happen, and the top-level switch comes back to making societies trustworthy for their creditors. Although this has broken down many times before and we survived, the urgency at present is because we’re figuring out the technology literally this decade. By technology I’m not talking about using it for every cup of coffee on the street, but as a replacement for the Fed auctioning 1T/year. Imagine if Bernanke had been unaccountable to democracies in ’08; at that point you have no choice but to play the game of algos.

        tl;dr – Although we still have no idea when intelligence gets digital, making the dominant currency digital is quite possible today, and not implausible.

        • Aegeus says:

          Yes, you face a risk of fiscal insolvency with any fiat currency, but only for that particular currency (if all major fiat currencies are going under, it’s time to invest in ammo and canned food, not Bitcoins). So if you think the US dollar is about to go under, you don’t move to Bitcoin, you start moving your assets to Euros or Yen or something. You need to explain why, when faced with this problem, a businessman would try jumping to a cryptocurrency full of unknowns, instead of one of the many other tried-and-true fiat currencies available.

          Also, the fact that people can lose their life savings to an algorithm without legal recourse seems like a reason that people won’t swap over to Bitcoin in the first place. Like, even with the Bitcoin community as small as it is, we have enough stories about people getting swindled (or just plain losing their coins) that you can read a new one on /r/sorryforyourloss every week. The trade-off between “trust is enforced by math” and “trust is enforced by long-trusted law enforcement and financial institutions” doesn’t seem like an obviously beneficial one.

          (And as I already pointed out, the currency isn’t responsible for most of this. Microsoft is not subject to Chinese law, even if the only currency they hold is the yuan. It just means that it would be easier for them to pay bills in China and harder to do it in the US. Likewise, changing over to Bitcoin+Ethereum might make your contracts perfectly ironclad, but it doesn’t make any other part of your business ironclad).

          I think Bitcoin will continue to be used for things it’s useful for – anonymous, trustless transactions outside the law – and fiat currencies, from one country or another, will continue to be preferable for literally everything else.

          • SUT says:

            > if all major fiat currencies are going under, it’s time to invest in ammo and canned food, not Bitcoins…instead of one of the many other tried-and-true fiat currencies

            Chile, Russia, and Argentina have all come back from ruinous macro policies within a generation’s time. No reason we have to go Mad Max.

            But this time is different, as all the alternatives (US, Euro, Japan) are in the same predicament: an aging population with stagnant growth. You can’t diversify outside these demographic realities.

            As to MS being above Chinese law despite running their business in yuan, that seems rather naive. Although Gulf states like the dollar, we occasionally freeze their assets. We rationalize our actions as supporting non-proliferation or human rights, but different political systems will pursue other agendas through the same means. You don’t answer directly to the law, but you can be harshly coerced when you depend on a foreign financial system.

            The specific vehicle – bitcoin, ethereum, the one they invent next year, etc – isn’t so important except that this is the first time that digital looks almost good enough to work as a substitute.

            But this general idea is not new at all: the majority of people in the world already do finance and emergency savings in a currency outside their political control. In fact they do this _because_ their country has no political control over the currency. This problem, of individuals losing faith in their societies looks to be on the rise. And just as in BNW, power gets willingly handed over to our robot overlords if we can’t manage ourselves.

          • John Schilling says:

            Although Gulf states like the dollar, we occasionally freeze their assets

            Worth clarifying, though, is that we can do this not because their assets are denominated in dollars, but because their assets are held in or flow through US banks. The way the global financial system is structured, a large fraction of gigabuck-level transactions, and I believe almost all transactions of the form “Hey, everybody on Earth, who wants to loan/invest billions of dollars here?”, flow through banks in New York and/or London at some stage regardless of what currency the transaction is denominated in. And they are subject to seizure under US law no matter what currency the transaction is denominated in.

            It is possible, but rare at the gigabuck level, to deal in dollars without touching US banks. If this is done, the US government cannot directly freeze the assets in question; there’s no way to “turn off” a dollar by remote control. It is also possible to deal in yuan/RMB without touching Chinese banks, or Euros without going through European banks, with similar effect. Though obviously if a country’s banks are consistently trying to seize foreign assets you’d expect the foreigners would in the long run avoid that country’s banks and currency alike.

            Also obviously, an economic superpower can lean on foreign banks and even national governments to do its bidding and join its blockades, but that also is currency-independent.

  12. shin_getter says:

    Capitalism, the original “weak” superintelligence, is not safe with regard to human values and has most of the failure modes of AI when given sufficient capacity.

    Now, libertarians suggest that markets can encode most human values; however, in practice a number of theoretical and practical barriers prevent it. In a world where the “economy” dominates, values that are not encoded within the law and meme space that define the economic system are ruthlessly destroyed in the long optimizing run.

    All it takes for the superintelligence to operate is the combination of control over physical resources, power in the memetic space, and the development of justifications.

    You like your functioning social relations? The system would not think twice about replacing them with ones full of narcissism and envy if it could sell more widgets. You like having meaning and an independent sense of self-worth? The system would love to replace that with a sense of reality shaped more by consumption and work. The system has dominant control of information and concept flows and is using it to further its goals. An entire massive industry in the modern world is about beaming (still weak) mind-control rays into people’s heads. Paving over forests is so 19th century.

    Try to control it with government? The system figures out regulatory capture. Try to fight it with ideas? The superintelligence figures out good ideas to fight back with, if it doesn’t just cheat and intercept the signal before delivery.

    Try to run away from it? You might not be interested in the market economy, but the market economy is interested in you, and many a static “primitive” society has been “liberated” from its backwards culture via many methods.
    ——————–
    The end game is always the wirehead that has the sole motivation of justifying more economic action. Imagine a brain whose only action is to click on the buy button for all of eternity.

  13. Anomaly UK says:

    I guess the nearest existing thing to an ascended economic entity is the (trivial) bitcoin lottery bots. When I was playing with BTC they were the majority of the transaction volume; I don’t know if that’s still the case.

    Basically, you send btc to an address, and it will either send you back more, or nothing, depending on some random source (which can be the blockchain itself to make it verifiably honest).

    All it would need to do would be to pay for its own hosting automatically, and that would be that. Of course, its customers are likely to all be human, so there’s no closed loop, but if it were doing something a bit more useful…

  14. jordan says:

    I mean, I think the dangers here are mostly just the dangers of artificial intelligence. The potential implications of AI are terrifying, and when I see shows or movies about how nice robots are and how prejudiced and mean the humans are to them I can’t help but gape my mouth open in disbelief. I think some sort of successfully self-replicating machine is our second greatest existential threat, the first being some sort of DNA-modified human-like creature that is way smarter than us and just totally wipes us out / turns us into factory farm meat. So I guess what I’m saying is basically that your talk of the dangers of an “ascended economy” run by AI is basically the dangers of AI; they just sound worse because they have the word “corporation” in them, which has kind of become a trigger word for “sooooooo evil.”

    But all that said, corporations, or laws, or even traditions could all be seen as a very simple form of “artificial intelligence” that guides human behaviors while not being “human” itself. In some ways, our universal acceptance of something like the constitution is all of us agreeing to bow down to an artificial intellect of sorts – the conglomeration of our beloved founding fathers’ ideals brought together and born anew into an almighty piece of paper.

    (I like laws/constitutions btw, I just think they are kind of like a stupid form of AI that doesn’t self-replicate and depends a great deal on human imagination to work properly – e.g. the sort of AI I feel totally comfortable with.)

    Anyways, AI is scary, I hope the people working on it decide to, like, go explore one of Jupiter’s moons instead. I feel like working on AI is in some ways like trying to build a bomb big enough to blow up the entire earth “just in case, you know, we need to blow up Pluto sometime in the future. And… wouldn’t it be cool!!!?”

    Anyways, love your blog, it’s a great read and sometimes I wonder if YOU are an AI because all the research/ energy that goes into writing this along with having a full time job seems to me to be an almost impossible feat. Thanks for all your work-

  15. FullMeta_Rationalist says:

    I was browsing HN when I came across an imgur post. How to Win at Monopoly and Lose All Your Friends. I thought it was relevant to this discussion.

    The strategy relies on building a monopoly. Not a land monopoly, but a housing monopoly. There’s a fixed supply of 32 houses total. According to the manual, increasing the supply is impossible/forbidden. Often, nobody is aware of this rule.

    By buying as many houses as one can (don’t upgrade to a hotel), you can starve the competition of housing. Ideally, you’ll have 2 monopolies (6 lots) by the end game. At 4 houses per lot, that’s a maximum of 24 houses. I.e. 75% of the housing market. Eventually, the competition won’t be able to upgrade their own monopolies, and therefore will be starved out of the game through your own obscene rent.
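
    A quick check of that arithmetic, assuming two three-lot colour groups:

```python
# The housing-supply arithmetic from the comment above.
HOUSES_IN_BANK = 32                 # fixed supply; the bank cannot "print" more
lots = 2 * 3                        # two monopolies of three lots each
houses_held = lots * 4              # build to 4 houses per lot, never hotels
print(houses_held)                          # 24
print(houses_held / HOUSES_IN_BANK)         # 0.75 -- three quarters of all housing
print(HOUSES_IN_BANK - houses_held)         # 8 houses left for everyone else
```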

    Interesting quote:

    Monopoly was, in fact, a rip-off of “The Landlord’s Game”, a game designed decades earlier by Elizabeth Magie, a proponent of Georgism (https://en.wikipedia.org/wiki/Georgism). The game was designed to teach children about the inherent unfairness of the capitalist land-grabbing system, and demonstrate how it enriches landlords while impoverishing tenants. Interestingly, it also included rules for a co-operative, anti-monopolist “Prosperity Game”, in which victory was achieved when all players had at least double their original stake.

  16. Doctor Mist says:

    Imagine a mining-robot company that took one input (steel) and produced one output (mining-robots), which it would sell either for money or for steel below a certain price. And imagine a steel-mining company that took one input (mining-robots) and produced one output (steel) which it would sell for either money or for mining-robots below a certain price. The two companies could get into a stable loop and end up tiling the universe with steel and mining-robots without caring whether anybody else wanted either.

    Numbers, or it didn’t happen. Both these companies are making a profit? I’m reminded of the natives of the Isles of Scilly who are said to eke out a precarious livelihood by taking in each other’s washing.
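
    For what it’s worth, a toy version of that loop with invented yields, measured in steel and robots rather than cash (which is roughly the sense in which the pair could “profit” without any outside customer):

```python
# Toy closed loop: robots mine steel, all steel is immediately spent on new robots.
# The yields (5 steel per robot per year, 2 steel per robot built) are invented.
def simulate(years=10, robots=1.0, steel=10.0,
             steel_per_robot_per_year=5.0, steel_cost_per_robot=2.0):
    for year in range(1, years + 1):
        steel += robots * steel_per_robot_per_year   # mining robots dig up steel
        new_robots = steel // steel_cost_per_robot   # spend the steel on robots
        steel -= new_robots * steel_cost_per_robot
        robots += new_robots
        print(f"year {year}: {robots:.0f} robots, {steel:.0f} steel left over")

simulate()  # both stocks grow without a single outside customer
```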

    • Anomaly UK says:

      But they are not taking in each other’s washing, they are mining steel. In principle they are profiting measured in steel, and they can exchange their surplus for whatever it is they are trying to accumulate.

      If they are converting their surplus steel into cash, then someone – possibly a human – is getting some benefit from that exchange. But if they just accumulate steel, or reinvest the surplus into more robots roaming the solar system, it’s not so clear.

      • Doctor Mist says:

        Well, OK, but.

        Mind you, I’m on record as tentatively fearing some real-world equivalent of the paper-clip monster; I’m just not sure this is the likely route. If we start with modern-day corporations, which (in theory) have to turn a profit at least in the long run or they get acquired or reorged, I’m not sure how they mutate into something whose terminal goal is not cash but rather steel. If they both want cash, I don’t know where it comes from unless they are selling steel (or robots) to a third party, which wasn’t Scott’s scenario.

        John Schindler pointed out that there are non-profit corporations, which might be exempt from this quibble. Also, I guess I could imagine both corporations being like Amazon and going on for years without turning a profit in the hopes of building market share. If this happens in an environment where GDP is doubling every day, I guess a lot of tiling can happen before the chickens come home to roost.

      • RCF says:

        But which company is measuring their wealth in steel? The steel company doesn’t want steel for the sake of having steel, it wants steel for the sake of selling it for a profit. The robot company doesn’t want steel for the sake of steel, it wants it to make robots.

      I suppose one possibility is that each company is not measuring its actual bank account but its projected bank account, and is making its projection without taking into account the fact that there’s no external source of funds.

  17. RCF says:

    Isn’t a Paper Clip Maximizer just a stable loop with one member?

  18. dalitt says:

    This is actually the best rhetoric I’ve heard as to why the Friendly AI problem is difficult. Most people accept that corporations are difficult-to-regulate beings with values that are not aligned with standard human values. Why should AI be any easier to regulate?

  19. AL says:

    Am I an instance of me, or am I me?

    I’ve been put under for surgery in the past (twice). Did I die then, and was what followed merely a reinstantiation–a copy of me that thinks it’s the original? Or am I reinstantiated every night, when I go to sleep? Or every literal instant, if time itself (and my own sense of continuity) is but an illusion?

    Is the me of yesterday dead? What connection does the I of now have to the me of tomorrow?

    Do I have a soul? It seems to me that my laptop does not have a soul. It seems to me that any copy of me, running on my laptop, or check that, a perfect copy, even, in living breathing flesh, would not be me. I can tell you exactly why: because I’d still care about myself. My copy might go on to live a wonderful and rich life, but it wouldn’t be me. I’d rather that wonderful life be mine, not some copy of me, living in my laptop or even adventure traveling the swankiest parts of the world in physical form. I can’t taste my copy’s perfect margarita, however much he seems to enjoy it.

    It worries me that there are thinkers who like the idea of copies of me living inside a paradise in my laptop. It worries me that many of those thinkers think those copies are me. I think those copies, however fine, are participants in Bostrom’s Empty Disneyland. And I think Empty Disneyland is the very bleakest of possible futures.

    I used to think AI and in particular superintelligence would be a new form of life. Now I’m not so sure. Now I worry that superintelligence is just likely to be whirling processes spinning inside my (future-tech) laptop. Subroutines. Copies. Not even worthy of the word dead, because they were never even once alive.

    Yet all the arrows seem to point in that direction. The only hope I cling to is that the future is unknowable–that these predictions will prove to be wildly off. Please let it be so. I’d prefer to stay me as long as possible.

    • Vamair says:

      I don’t think I understand “self” enough to be reasonably sure that a copy of me is or isn’t me. On the other hand, I really don’t see a reason for a copy not to be a person. It may be a different person, but still a person that feels and prefers and thinks and whatever else. If you don’t get into a Disneyland it doesn’t mean the Disneyland is empty.
      I still think that the ems of Hansonian future will be stripped of anything human in a heartbeat and will turn into algorithms for their jobs, but that doesn’t have anything to do with them being copies and everything to do with them being cheap to save and copy.

      • AL says:

        “If you don’t get into a Disneyland, it doesn’t mean the Disneyland is empty.”

        Well put.

        The more I ponder this notion of me, the more I find myself thinking having a persistent, physical, flesh and blood body is very, very important. That may be a much more essentially human part than the copyists realize. We’ll see, I guess…

  20. daronson says:

    Scott, are you sure you have thought through the relationship between I-II and III? Because I see two separate things going on here.

    They are (roughly I-II): things like hedge funds that work with the stock market are functioning under conditions of intense competition and time constraints and have high complexity, hence they are a likely place to accidentally spawn an intelligent AI.

    and III: Things like hedge funds are functioning under conditions of intense competition and time constraints and currently cause externalities due to incomplete alignment of the economy with the common good.

    I agree that both of these things are related to the economy and both are/would be bad, but I’m not sure how they are linked, other than the phrase “things like hedge funds that work with the stock market are functioning under conditions of intense competition and time constraints”. I don’t see how a paperclip factory made by a hedge fund would be of a different nature than one caused by a runaway funny cat video generator. In particular, I find it difficult to imagine a scenario where people recognize that a (not yet intelligent) economy-modifying AI is definitely about to go rogue but not be able to prevent it because “economics”. This sounds to me about as reasonable as saying that a weather-modification AI is more likely to go rogue because the weather already causes unpredictable and harmful things like hurricanes.

  21. abner says:

    Every credible sci-fi future I come across now is so completely alien that I feel like I have no stake in them whatsoever – and yet there’s always the impression that anything we can imagine isn’t weird or alien enough. The old quaint “space exploration” stuff with recognizable humans was such a valuable post-religion; this stuff is completely devoid of succor.

    • Aegeus says:

      Have you read Accelerando, by Charles Stross? That book includes bizarre post-singularity stuff like superintelligences hanging out around the sun disassembling planets for computronium, but it focuses on characters living on the outskirts where the technology is still human and comprehensible. A really good singularity story.

  22. jes5199 says:

    I’m having trouble swallowing that you could have a meaningful economy decoupled from the physical world – it seems like irrelevant epiphenomena. What hardware are your Ems going to run on? Imagine you have self-reproducing solar/computronium flowers that are the hosts that your Economics 2.0 runs on. Those compuflowers still have to compete for atoms and energy with anything else in the physical world – would they not be overrun by some gray-goo-flowers which, instead of participating in simuleconomics, have application-specific circuitry which can only calculate the most efficient way to disable and overtake computronium?

    Is there some proposal of what the Ems are so busy doing that can cause their host machines to outcompete threats that are the AI/robot equivalent of viruses, bacteria, and insects?

  23. Ruprect says:

    If processes operate within the legal framework, within the political system (and if the goal is to make profit, they must) then surely they are vulnerable to changes in the political system. If we can change the environment in which the process lives to something quite different to the one in which it developed, there is no reason to think it could respond.
    To the extent that we control the overall environment, we control the process, at least in terms of a veto.
    Where we don’t control the political environment, on any level, we are already lost.

    This is why politics must (ultimately) be a slave to ordinary experience, rather than the other way around. Defend your mother before (more abstract) conceptions of justice.
    (The only solution is consideration for others as a non-instrumental virtue – it’s the solution because it transcends solutions – our goals are no longer slave to an intellectual process.)

  24. Mengsk says:

    Odd thought, but one rule which might constrain this is to make the corporation’s goal not “maximize shareholder equity” but “keep shareholder equity above a certain value”, where that value is tied to the number of human shareholders that exist.

    Aside from nightmare scenarios where the Ascended Economy, facing losses, decides to kill its shareholders in order to “lower its target”, a system like this would not fall into too extravagant wastefulness a la mining glacier water from Pluto.

  25. Alex Zavoluk says:

    I would think a closed loop smaller than the whole economy is always sub-optimal, since you’re missing potential customers/suppliers.

  26. gathaung says:

    Firstly, we have already set up part of this:

    A corporation can be viewed as an agent that uses human processors. Human processors are used because current AI sucks. The corporation is an agent independent of the people running it (CEO, board, etc.) because the involved humans are bound to “maximize shareholder value”.
    The concept of “fiduciary duty” is taken really seriously in western culture, almost as much as the Hippocratic oath: a lawyer or banker or company official may refuse to do business or take a job if he morally disagrees with his clients, but current social consensus is that he is not allowed to betray this duty (e.g. Tim Cook deciding that Apple donates $100bn against malaria).

    Just like a doctor may be a soldier and kill people, or may refuse to treat an evil dictator, but is just not supposed to treat and secretly poison the tyrant.

    Now, about the ascension. There is a fun concept in “shareholder democracy” that I would rather call “decoupling”: pyramid or even cyclic ownership structures.

    Let me explain in the context of defrauding a couple of stockholders. Two friends Alice and Bob found two companies, “CorpA” and “CorpB”. Initially, they are sole owners and CEOs of their companies.

    Now, Alice sells all her stock in CorpA (which also raises capital): 51% to CorpB and 49% to outside investors. Likewise, Bob sells all his stock in CorpB, 51% to CorpA and 49% to outside investors. At all stockholder meetings of CorpA, Alice is confirmed as CEO: after all, the majority is carried by CorpB, and the votes are cast by Bob. Likewise in reverse. Now there is no reasonable way of restricting the amount of money Alice and Bob get paid for their duty as CEO.

    Suppose you want to perform a hostile takeover of CorpA and CorpB. You buy all the stocks. You legally own (indirectly) 100% of the cashflow rights to both corporations. You have no way, outside of a lawsuit, of removing Alice and Bob. CorpA and CorpB have decoupled, or ascended.
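
    A minimal sketch of that Alice/Bob structure (the ownership fractions are the ones above; the vote-tallying rule, where the largest block carries the meeting, is a simplification):

```python
# The Alice/Bob cross-ownership setup, with the fractions from the comment.
ownership = {
    # holder -> {company: fraction of that company's shares held}
    "CorpA": {"CorpB": 0.51},
    "CorpB": {"CorpA": 0.51},
    "outside investors": {"CorpA": 0.49, "CorpB": 0.49},
}
ceo = {"CorpA": "Alice", "CorpB": "Bob"}   # whoever casts a corporation's votes

def who_carries_the_meeting(company):
    """Return (voter, fraction) for the largest block of `company`'s shares."""
    blocks = [(stakes[company], holder)
              for holder, stakes in ownership.items() if company in stakes]
    frac, holder = max(blocks)
    # If the majority holder is itself a corporation, its CEO casts the votes.
    return ceo.get(holder, holder), frac

for corp in ("CorpA", "CorpB"):
    voter, frac = who_carries_the_meeting(corp)
    print(f"{corp}: the {frac:.0%} block is voted by {voter}")
# Outside investors hold 49% of each company (and, through the cross-holdings,
# effectively all the non-circular cash-flow rights), yet never carry a meeting.
```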

    The weaker forms, cross-ownership with small amounts of stock, and partially subsidiary daughter companies, are very widely and shamelessly used in the real world.

    A cynic would say that this has already happened on a large scale: not two strongly (illegally) cooperating CEOs, but instead a whole loosely cooperating “executive class”.

    What laws are there about this kind of decoupling? This depends on the country.
    AFAIK, in the US, cross-ownership and cyclic ownership above 5% are illegal. AFAIK, in Germany, the threshold is 25% (which is totally laughable). AFAIK, these laws are seldom enforced. I once tried to scrape the SEC homepage to look for decoupled structures, but the data format sucked too much. If anybody here is willing to code a scraper and parser for the legally mandated reports of “major shareholders” from the SEC homepage, and is interested in writing a joint paper about this, please contact me (reply to comment). I would then write the graph-theoretic code (and light math theory) for finding ascended/decoupled companies (of course, you are invited to do this alone, no attribution required).
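
    The graph-theoretic part is the easy bit; something like the following sketch, with an invented edge list and assuming the networkx library, would flag cycles whose stakes all exceed a threshold (the scraping is the hard part):

```python
# Sketch: flag ownership cycles whose every stake meets a threshold.
# The edge list is invented; real data would come from scraped filings.
import networkx as nx

edges = [("CorpA", "CorpB", 0.51), ("CorpB", "CorpA", 0.51),
         ("CorpA", "CorpC", 0.03), ("CorpC", "CorpA", 0.04)]

G = nx.DiGraph()
for owner, owned, frac in edges:
    G.add_edge(owner, owned, frac=frac)

def suspicious_cycles(graph, threshold=0.05):
    """Yield (cycle, stakes) where every stake in the cycle is >= threshold."""
    for cycle in nx.simple_cycles(graph):
        stakes = [graph[u][v]["frac"]
                  for u, v in zip(cycle, cycle[1:] + cycle[:1])]
        if min(stakes) >= threshold:
            yield cycle, stakes

for cycle, stakes in suspicious_cycles(G):
    print(" -> ".join(cycle + cycle[:1]), stakes)
# Prints the CorpA/CorpB loop; the CorpA/CorpC loop stays under the 5% threshold,
# which is exactly the loophole described below (split one big cycle into small ones).
```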

    This kind of law is of course totally helpless: it is trivial to set up an ownership graph where each individual cycle is below 5%, but the total network is decoupled.

    IMHO, the only feasible legal solution is the following:

    1. All stocks carry the same vote: shenanigans like differential stock classes are outlawed.

    2. Shareholder voting gets legal protection: All contracts about “I/CorpX will vote the following way” are void, just like you cannot sell your vote in political elections.

    3. Voting rights carried by non-natural persons (aka corporations) get transitively transferred to their stockholders. This means: I own stock in CorpA, CorpA owns stock in CorpB. Then I get to vote at the CorpB meetings, not the executives of CorpA.

    PS. Compare to political voting systems, especially iterated voting on different scales, with winner-takes-all. See US or British political voting.

    For an absolute majority at the national level, you need 25% of the popular vote (if your voters live in the right places). Add another layer, and you need 12.5%. Add too many cyclical layers, and you need 0%.
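
    A toy check of that arithmetic (ignoring the “just over 50%” margins at each level):

```python
# Iterated winner-takes-all with equal-sized units: the minimal winning share of
# the popular vote halves with each added layer of aggregation.
def minimal_winning_share(layers):
    return 0.5 ** (layers + 1)

for layers in range(4):
    print(f"{layers} layer(s) of aggregation: ~{minimal_winning_share(layers):.1%}")
# 0 layers ~50%, 1 layer ~25%, 2 layers ~12.5%, 3 layers ~6.3% -- and so on toward 0%.
```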

    • LWNielsenim says:

      Isn’t it foreseeable that non-human asset-holding entities like Oxford University, the Harvard Corporation, and the Howard Hughes Medical Institute will fight humanist ownership reforms tooth-and-nail?

      Surely they are institutionally motivated to oppose humanist ownership reforms, and their public justification can be as simple as “we are not evil” (a justification that has worked pretty well, so far, for Google Incorporated).

      Moreover, because the oldest, most elite universities have already survived longer than any of the governmental institutions that (ineffectually) seek to regulate them, their principled opposition to humanist ownership reforms foreseeably will be well-planned, and implemented without even the moral compunctions and social restraints that democracies and markets impose (however imperfectly).

      The University of SkyNet LLC … yikes!
      No wait, I mean, how do we invest? 🙂

      • gathaung says:

        No need to posit AI. Sorry, I misread your comment.

        Yeah, current business and universities would fight a humanist ownership reform tooth-and-nail.

        Different (voting, nonvoting, supervoting) stock classes are widely used in order to allow founders to retain control over companies. Partially owned daughter companies are also pretty popular; part of this is for tax reasons, I believe. Cross-ownership deals are used for at least two reasons: to ensure cooperation (both companies have a stake in the other’s success) and to protect both against hostile takeover. In my personal opinion, both should be illegal, for anti-trust (monopoly), fiduciary, and anti-ascension reasons.

        Organized crime and borderline organized crime uses layered and cyclic ownership structures to defraud investors and retain control (if power is more important than wealth). Example paper.

        The political battle to get meaningful restrictions, especially across international borders, on circular ownership is probably currently impossible.

        About the illegality of the entire thing. Extracting resources from voting-controlled companies is definitely a betrayal of fiduciary duty, but this is almost impossible to prove.

        IANAL, but the relevant US law directly restricting cyclic ownership seems to be 15 U.S. Code § 80a–20, (c) and (d). The threshold appears to be 3%.

        IANAL, and neither a banker, but the law appears to allow companies to buy all the stocks they want, as long as they don’t know whether cyclic structures arise. If they know, then they have to sell. This was my secondary reason for trying to write some code: if one finds and reports strong cycles, then the involved companies may be forced to sell. This would be an awesome troll.

  27. Ariel Ben-Yehuda says:

    > And then there are the weirder problems, like ascended corporations hacking into the stock market and wireheading themselves. When this happens, I want credit for being the first person to predict it.

    It already happened. It’s called Enron (and WorldCom and Lehman Brothers and…).

  28. The Nybbler says:

    So how is this different when the economy does include humans? Is an economy of machines satisfying their imperatives really any different than an economy where humans satisfy ours? (I mean from an outside viewpoint; obviously there’s a difference from a human viewpoint). This is headed into the nihilistic “a zygote is a gamete’s way of producing more gametes” territory, I guess.

  29. LWNielsenim says:

    ChrisA says: “Everyone is making this a lot more complicated than it needs to be. There are only two future states when (if) we reach the point of strong AI — Disaster and Nirvana.”

    This view of AI is congenial to folks who conceive of economic markets as competitive, efficient, and rational.

    More explicitly, a quintessentially rationalist chain of reasoning is as follows: markets create competition; competition fosters efficiency; efficiency requires rationality; strong AIs will be supremely rational; hence the seemingly inexorable futurological prediction that strong AIs will control future economies.

    But didn’t a similar chain of quintessentially rationalist futurology lead inexorably, yet disastrously, from the rigorous theorems of von Neumann and Morgenstern’s Theory of Games and Economic Behavior (1944), to von Neumann’s arms-enforced pax Americana, to the insanely wasteful and horrifically risky arms race of Herman Kahn’s On Thermonuclear War (1960), to the ultra-rationalist neoconservative dystopia of Phyllis Schlafly and Chester Charles Ward’s Strike from Space (1965)?

    It wasn’t rationalism that broke the chain of futurological reasoning that spawned the thermonuclear arms race, was it? Wasn’t the arms race ended, instead, chiefly by the embrace of humanist cognition, and especially by the medium of humanist humor? The classic exemplar of which is the Stanley Kubrick / Terry Southern humanist masterpiece Doctor Strangelove (1964).

    Now look, boys, I ain’t much of a hand at makin’ speeches, but I got a pretty fair idea that something doggone important is goin’ on back there. And I got a fair idea the kinda personal emotions that some of you fellas may be thinkin’. Heck, I reckon you wouldn’t even be human bein’s if you didn’t have some pretty strong personal feelin’s about nuclear combat.

    Tough as it may be for strict rationalism to accommodate (or even appreciate), there’s plenty of evidence that “the past is prologue”, in that ever-stronger AIs are being artificially realized (already), not as “strongly rational AIs”, but rather as strongly empathic AIs.

    This is a world in which humanistics (at least) can reasonably foresee that the deep neural nets of strong empathic AI entities will understand themselves (and one another) about as skillfully as we biological intelligences understand ourselves (and one another).

    Which is to foresee that strong AIs (both technological and biological) will understand themselves (and each other) surprisingly well, even marvelously well, but not perfectly well, or even coercively well. And this will reflect limits of rational cognition that are built-in to the very foundations of mathematical logic; limits that already are necessitating the technological abandonment of rule-based modus ponens-implemented strictly rational AIs.

    For beginners a recommended set of texts for studying strongly empathic cognition, both artificial and biological, is Terry Pratchett’s (best-selling) Discworld novels. What other works do SSC readers recommend, as effectively teaching both empathic cognition and empathy-ascended economics?

    • Samuel Skinner says:

      But didn’t a similar chain of quintessentially rationalist futurology lead inexorably, yet disastrously, from the rigorous theorems of von Neumann and Morgenstern’s Theory of Games and Economic Behavior (1944), to von Neumann’s arms-enforced pax Americana, to the insanely wasteful and horrifically risky arms race of Herman Kahn’s On Thermonuclear War (1960), to the ultra-rationalist neoconservative dystopia of Phyllis Schlafly and Chester Charles Ward’s Strike from Space (1965)?

      No. The US would have nuked the USSR in 1946 or after the air defense was strong enough (maybe 53 with Nike) if it was going purely on sociopathic rational interest.

      Of course if countries were sociopathic rationalists, World War 2 would never have happened, because Germany would have been dismantled after WW1. Total ruthlessness is a winning strategy if everyone knows you are utterly committed to it – just ask the Mongols.

      • LWNielsenim says:

        “Total ruthlessness is a winning strategy if everyone knows you are utterly committed to it.”

        Total ruthlessness is a winning utterly losing strategy if everyone your colleagues or your friends or your dates or your family — even your pets (empathic trigger warning) — knows you are utterly committed to it.

        That’s the evolutionary reason why empathy is neurotypical, right? Among humans apes primates mammals? Because it signals lack of ruthlessness?

        • Samuel Skinner says:

          Boy, a good thing I didn’t give an example of total ruthlessness. One that was a winning strategy in the real world.

          • LWNielsenim says:

            But isn’t that like asserting “one cool winter disproves anthropogenic global warming”, or “one flawed experiment disproves the HIV-AIDS link”, or “one long-lived uncle proves that smoking doesn’t cause cancer”?

            Isn’t the weight of the scientific evidence overwhelming on the opposite side in regard to all four realities? Those four realities being: sociobiological, cultural, economic, and even etymological centrality of “ruth”, thermodynamic reality of global warming, HIV-AIDS epidemiological causality, and smoking-cancer epidemiological causality?

            People calling themselves “rational” vehemently deny all four of these realities, for cherry-picked reasons that don’t seem all that rational to most folks (including me).

          • Samuel Skinner says:

            I’m not sure why you think it is one example. The Mongols were infamous for ruthlessness, but they aren’t the only example in history. The problem generally isn’t “people aren’t ruthless in politics”, but “people don’t have enough power to carry things out”.

            Since the case being talked about involves having enough power, that doesn’t come into it.

          • hlynkacg says:

            But isn’t that like asserting “one cool winter disproves anthropogenic global warming”

            It’s more like dismissing the “this is the hottest year on record” type hysteria by pointing out that last year was hotter.

          • LWNielsenim says:

            Sociobiologist Ed Wilson’s part-novel/part-parable Ant Hill (2010) is commended to SSC readers as a scientifically informed post-Strangelove meditation upon the social, economic, and ecological dimensions of a world deficient in “ruth”.

            Because apes deficient in ruth — no matter how rational — can’t be responsibly trusted with thermonuclear arsenals or global-scale carbon-burning technologies, isn’t that a plain lesson of history?

            We apes are lucky to have evolved as much ruth as we have. Some powerful evolutionary forces surely were at work. 🙂

          • hlynkacg says:

            Can you define “ruth” or are you just spouting byronesque anti-enlightenment nonsense?

          • Nita says:

            ruth (noun)

            1. pity or compassion.
            2. sorrow or grief.
            3. self-reproach; contrition; remorse.

            1: compassion for the misery of another
            2: sorrow for one’s own faults : remorse

          • Hlynkacg says:

            @Nita
            Thank you, I learn something new everyday.

            I did a quick search but the only references I found were to the biblical woman’s name.

          • LWNielsenim says:

            The Google Ngram Viewer (which is case-sensitive) shows a secular decline in ‘ruth’ usage since 1700, such that present ‘ruth’-usage has diminished to about 1/6 of 18th century ‘ruth’-usage.

            Like many people, I learned ‘ruth’ as a child, from works about chivalry:

            “`Tis the badge of Tete-noire, the Norman!” cried a seaman-mariner. “I have seen it before, when he harried us at Winchelsea. He is a wondrous large and strong man, with no ruth for man, woman, or beast. They say that he hath the strength of six; and, certes, he hath the crimes of six upon his soul. See, now, to the poor souls who swing at either end of his yard-arm!”

            The passage is from Conan Doyle’s The White Company (1891), a book that was Conan Doyle’s personal favorite among all his works (including Doyle’s Holmes/Watson stories).

            In short, in previous generations everyone appreciated ‘ruth’ as a fundamental ideal of chivalry. But nowadays, as ruthless corporations gain greatly in power, the public appreciation of chivalrous ruth is diminishing in inverse proportion to corporate power-gains, to such a degree that many citizens (younger ones especially) literally do not know even the meaning of the word.

            Now in the 21st century, how shall we rationally account our collective ruthloss (meaning “ruth-loss”), when we are unconscious of it?

            Shall we rationalize that “market forces have produced our ruthloss, and so our ruthloss must be righteous”?

            Rationalizing our ruthloss is fundamentally unchivalrous, isn’t it? 🙁

    • Anonymous says:

      Hi, John Sidles!

    • ChrisA says:

      “This view of AI is congenial to folks who conceive of economic markets as competitive, efficient, and rational.”

      I don’t understand how your conclusion follows from what I said – I don’t think strong AI needs to be anything in particular ethically (it could be programmed with human-like ethics, or no ethics, or some kind of synthetic ethics). I don’t know what is meant by efficient or rational in this context either. The rest of your example is simply anthropomorphism – you are trying to draw parallels between the actions of humans working in groups (with all their constraints in thinking, programmed morals, and physical limitations) and a single standalone super-intelligent AI. That an AI would not have fought the cold war the same way humans did is about all we can say.

      Again – I see many people trapped by local thinking (even Hanson). Today people have some fears about being exploited by other people, but worrying about that problem in an AI world and developing strategies for it when there are all these other much bigger ones associated with the AI problem just seems silly.

  30. bean says:

    I’d object to one of your examples, namely the one on airliner maintenance. First, the author doesn’t seem to understand the various kinds of maintenance that get done. Second, he hasn’t done the actual numbers on problems after domestic vs foreign maintenance. When you’re doing the sort of work involved in a C-check or D-check, there are a lot of moving parts involved. Even the best facility is going to have problems, and the sort of problems discussed here cost operators lots of money, in lost revenue and (probably) fines. Serious quality problems at foreign maintenance shops are going to show up very quickly.

  31. Demosthenes says:

    Can you really call it futurism if you’re just channeling the thoughts of 30s and 40s socialists?

    • TD says:

      I didn’t think that 30s and 40s socialists had much to say about AI/ems/automated companies.

      (Marx had his “fragment on machines” in the 19th Century, but even there what he’s describing is more like a Rube Goldberg extrapolation of mechanization where the workers play the part of “conscious linkages” in pulling the levers to direct processes. AI/ems/whatever don’t need the proletariat as conscious linkages.)

  32. Saal says:

    I think this focus on stable loops is barking up the wrong tree. The key idea behind Land’s work (I think?) can be summed up as: natural selection works on everything. Absolutely everything. It’s not necessary for an ascended economy to cut every single human out of the owner and consumer roles in an economy for an ascended economy to result in a Bad Situation for Most Humans.

    Once you’ve replaced all the workers and management (everything in between the capitalists and the consumers), what incentive is there to provide a UBI or something of that nature to the disenfranchised? I keep reading “post-scarcity economy”, but scarcity is marginal (hence we still have “the poor” in countries where the poor have standards of living several times that of yesteryear’s aristocracy).

    If the capitalists owning the American Conglomerate of Ascended Corps instruct their CEAlgorithms to provide a UBI to the consumers of America so they can buy the things ACAC produces, they’ll be outcompeted by the Chinese Conglomerate of Ascended Corps that has their algorithm poison the water supply of their nation with chemicals that reduce fertility to sub-replacement levels, buy up all the land and resources freed up over the next generation, go ahead with UBI for the remainder and use half the reduction in expenditure to buy yachts, the other half reinvest in the company.

    The CCAC, in turn, is outcompeted by the RCAC (Russian CAC), which does the same thing, forgets a UBI altogether, and lets the majority of the population starve or survive at subsistence so the capitalists can trade strings of bits at 50 TB/s.

    Now abstract away the countries and make the capitalists anonymous satoshi hoarders performing all their business through 25 layers of offshore shells over cryptomarkets and you’ve effectively written off most govt intervention.

    Is this how it would go, exactly? IANAE or any kind of expert, so I doubt it. But I would expect something much more like this than an accidental loop dooming us all. And if we assume the algorithms are advanced enough such that stockholders who give more control over to their CEAs outcompete those who remain more hands on, certain weird shit could occur.

  33. ChrisA says:

    Everyone is making this a lot more complicated than it needs to be. There are only two future states when (if) we reach the point of strong AI – Disaster and Nirvana. Like the difference between unhappy and happy families, disaster can occur in myriad ways – [the world could be destroyed by a paper-clip-optimizing AI, or uploaded individuals could edit away their ethics and become giant killing machines to control all the world’s resources, or we could be infantilized by hedonic-optimising AIs seeking to pump us all with happiness drugs, etc. etc.]. There is only one nirvana though – all humans achieve god-like status with access to any resource needed. It’s going to be very hard to get to this point, but the challenge isn’t the type of economy we have after the transition (it’s almost irrelevant), it’s managing the transition itself. The hyperventilating over the particular disaster scenario where an AI-run economy enslaves us or copies of us for some reason (what could we really provide for a 1,000+ IQ AI?) seems like a pathetic hijack of this mega challenge to make a local political point around some trivial fight over minimum wages or something. Like in the 1950s, where you would see some green alien monster salivating over the nubile young maiden on the cover of your Amazing Stories magazine; the scenario was of current interest to the audience of teen boys reading the magazines… but it is probably not the highest priority to worry about if you did meet an alien.

    • Tsnom Eroc says:

      Makes sense. A middle ground is that emotion is somehow a mathematically conserved quantity, so AIs with a utility-maximization goal (Jeremy Bentham style) would simply not *care*. It’s not really a middle ground, really; it’s a different sort of “extreme” state. But with strong AI, I don’t see any middling states of war, peace, or anything resembling something relatable to current humans arising.

      This “ems” book is so damned strange, and it seems mostly like a useless distraction from any issue. It somehow has godly AI that has power over people, with a commentary about labour markets and vacation time as if that makes any sense at all. That makes for entertaining sci-fi stories that need a relatable protagonist. But that’s it.

      Any human talents would be utterly worthless to a 1000+ IQ AI. I mean, do people ask lobotomized cats to produce anything we could find useful?

  35. the first thing capital has to work out is how to power itself (oil won’t last forever, neither will uranium, solar ain’t nearly good enough yet). if and when that is done, things shall get more clear.

  35. HeelBearCub says:

    And imagine a steel-mining company that took one input (mining-robots) and produced one output (steel) which it would sell for either money or for mining-robots below a certain price.

    This is the quote being discussed elsewhere, but I wanted to point out that the reason this looks like stable tiling is that neither corp is set up to make a profit. Insert a profit motive and it stops looking like tiling (neither company can make a profit, so the incentive is to do nothing, which is less risky than doing anything) or it stops looking stable.

    In other words, the starting goal of each company is tiling, so duh, it looks like tiling.

  36. Gerion says:

    You should really, really read Alasdair MacIntyre’s “After Virtue” as a lot of your quandaries reduce to/stem from a defective moral system/absurd meta-ethics.

    • Peter says:

      This has already happened: see here. It seems he was deeply unimpressed. Quoth Scott:

      It is infuriating to read a book making one horrible argument after the other. And when it glibly concludes “…and therefore I am right about everything”, and you know you’ll never be able to contact the author, it gives a pale ghost of satisfaction to at least scrawl in the margin “YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD”.

      This is kind of how I felt about Alasdair MacIntyre’s After Virtue.

      • Gerion says:

        That’s quite unfortunate then, because Scott is wholly wrong on this:

        “You may notice a hole where one might place a Step 3, something like “Aristotelianism, in contrast, did objectively ground itself and create a perfect society in which everyone agreed on a foundation for morality.””

        Actually, he’s not even wrong. First of all, MacIntyre never makes an appeal to Nirvana. Secondly, the argument doesn’t need to be stated, since it should be common knowledge: the theory of evolution, which implies the actuality of such a thing as “human nature”. The fact that Aristotelian ethics admits an unseverable connection between morality and human functionality (which is what is meant by “character” here: the manner of being of humans), already makes it infinitely closer to the truth of the matter than the alternatives, since some parts of its schematics/details proving to be naive or incorrect is a very easy problem to solve by comparison.

        Alas, almost everyone who proclaims that evolution is true does not mean it in the sense that it is an ontologically real phenomenon, but in the sense that “proclaiming this is the intellectual fashion du jour, and I want to be cool, therefore” evolution is true.

        Little wonder then that Scott, much like Kant, makes absolutely nonsensical statements like:

        “You can attach a virtue (or several virtues) of either side of practically any moral dilemma, and virtue ethics says exactly nothing about how to balance out those conflicting duties. For example, in Kant’s famous “an axe murderer asks you where his intended victim is” case, the virtue of truthfulness conflicts with the virtue of compassion.”

        A virtue is a moral trait (like height is a biological one), a pattern of action, not a programmatic rule. Scott and Kant make indictments against deontological ethics, which he then ascribes to virtue ethics. Absurd.

        And again, little wonder then that that ball of nonsensical utterances is followed up with an equally confused note:

        “(note, by the way, that no one has an authoritative list of the virtues and they cannot be derived from first principles, so anyone is welcome to call anything a virtue and most people do).”

        There is no such thing as “authoritative list of virtues” because there cannot be such a thing: a virtue in one social context need not be a virtue in any social context. Just the same, there is no such thing as “authoritative list of beneficial traits” in an evolutionary sense, because there cannot be such a thing: what might be beneficial in some environment need not be beneficial in any environment.

        Either evolution is wrong, or Scott’s argument is absurd. (I could not bear to read much further through that post. It reeks of crass ignorance.)

        • Murphy says:

          Declaring that your system of ethics is automatically “infinitely closer to the truth” because “evolution” is unlikely to get much traction.

          a virtue in one social context need not be a virtue in any social context.

          I’m hoping you meant to include the word “other” in there rather than just having blanked out during set theory.

          So to be clear, babyeating can be a virtue but only in the context or contexts where babyeating is a virtue?

          Perhaps only if we first evolve to have the urge to eat babies in certain circumstances?

          Perhaps we can discuss this down at the tautology club, their meetings are where the tautology club meets.

          • Gerion says:

            Are you trying to pretend that cannibalism is necessarily fictional?

          • Murphy says:

            @Gerion

            I’m trying to clarify whether you were actually agreeing with Scott.

            If anyone is welcome to call anything a virtue, including babyeating, then what Scott was saying is basically correct.

          • Gerion says:

            Your inquiry is not well-posed and I don’t get what’s there to clarify. You’re free to call anything anything. The appellation is irrelevant. Again —

            Are you free to call any biological trait beneficial in an evolutionary sense?

          • Murphy says:

            @Gerion

            Are you free to call any biological trait beneficial in an evolutionary sense?

            You could say it but you’d be factually incorrect.

            There are traits which can spread through populations despite being deleterious to the carrier in every circumstance. Gene Drives are systems which can take advantage of this to force a deleterious trait through a population.

          • Geryon says:

            So we agree.

        • Peter says:

          Your Scott quote is followed by “This is exactly the argument MacIntyre digresses into a lengthy explanation of how much he likes Greek tragedy to hope we will avoid noticing him not making.”

          So, you’re saying that MacIntyre never made an appeal to Nirvana, and Scott says MacIntyre never made an appeal to Nirvana (instead, a long digression on Greek tragedy so that no-one would notice the gap in his argument).

          You say: “(I could not bare to read much further through that post. It reeks of crass ignorance.)” – try reading the end section, the one titled “Somebody Here Is Really Confused, And I Just Hope It’s Not Me”

          Anyway, what’s the specific relevance of _After Virtue_ to this particular post? Why bring it up here, and not on the Open Thread where you don’t have to worry about being off topic?

          • Gerion says:

            “What’s the specific relevance of _After Virtue_ to this particular post?”

            Scott frets a lot about how difficult it is to create an “ethical” catallaxy. After Virtue is a book exactly about why creating an ethical catallaxy is such a hard problem when approached within the confines of a modern/post-Enlightenment ethical framework.

          • Peter says:

            Are we actually going to get any details from you, or is this purely an exercise about you being smug about how liking your favourite author makes you so much better than everyone else?

          • Gerion says:

            MacIntyre is not even close to being my favourite author. I give details as asked, since I’m not in the business of writing expository essays in comment sections. So, if there’s anything specific you’d like to know…

          • Aegeus says:

            After Virtue is a book exactly about why creating an ethical catallaxy is such a hard problem when approached within the confines of a modern/post-Enlightenment ethical framework.

            Does it become an easy problem when approached outside the confines of a modern ethical framework? If not, why blame the framework?

          • Gerion says:

            It doesn’t become easy. It becomes a whole lot easier though. I think that’s reason enough to blame the framework.

          • Peter says:

            Well, surely there’s some specific point in Scott’s post that you could try addressing; you don’t have to write the whole expository essay, just a decent enough sample to show that you actually have something to contribute, and aren’t just here to be smug.

    • Murphy says:

      Even the wikipedia cliff notes seem to be making nutty claims:

      It begins with an allegory suggestive of the premise of the science-fiction novel A Canticle for Leibowitz: a world where all sciences have been dismantled quickly and completely. MacIntyre asks what the sciences would look like if they were re-assembled from the remnants of scientific knowledge that survived the catastrophe. He claims that the new sciences, though superficially similar to the old, would in fact be devoid of real scientific content, because the key suppositions and attitudes would not be present.

      Destroy every physics book, every chemistry book, every bit of collected scientific knowledge, with the only surviving scrap being a few individuals with the vague idea that you should try really hard to prove yourself wrong and share the details transparently, and eventually we’d end up re-creating pretty much everything. There would be new names for everything; polar bonds probably wouldn’t be called “polar bonds”, but as long as the shape of the water molecule remains the same it will be rediscovered.

      • Gerion says:

        You have excellent straw-manning skills.

      • Deiseach says:

        MacIntyre asks what the sciences would look like if they were re-assembled from the remnants of scientific knowledge that survived the catastrophe.

        We had that world already.

        The Middle Ages, where authority of the ancients was respected and revered, and giants of the past like Galen were quoted as the ultimate experts.

        And out of that, people started inventing the scientific method from the ground up. You don’t have to wait until the Renaissance (which was much more interested in classical humanism than in the sciences as such), Galileo, the Enlightenment, or Darwin.

        From the 14th century onwards you had groups like the Merton Calculators, even (Friar) Roger Bacon in the 13th century!

        We’re living in MacIntyre’s post-collapse of civilisation/re-discovery or invention of science world!

    • Gerion says:

      Since a book is a pretty long answer, let me comment differently, but in the same spirit: ditch the sophistry that is utilitarianism, and the solution to this problem is suddenly a lot less difficult.

      • Murphy says:

        and that solution is?

        • Gerion says:

          Don’t give any recognition to businesses as legal entities in and of themselves. Businesses can’t act morally because they lack a moral character. You intuitively understand this because you deem the very idea of suing a tractor preposterous. And yet, somehow it happened that you can sue a tractor manufacturer even though a factory is as much a moral actor as the tractor is.

          That is to say, this string:

          “Even today, a lot of corporations do things that effectively maximize shareholder value but which we consider socially irresponsible. Environmental devastation, slave labor, regulatory capture, funding biased science, lawfare against critics – the list goes on and on. They have a simple goal – make money – whereas what we really want them to do is much more complicated and harder to measure – make money without engaging in unethical behavior or creating externalities.”

          is utterly nonsensical, but mental gymnastics/semantic games allows us to pretend that it isn’t. Recognising this simplifies things significantly.

          • Psmith says:

            Can you elaborate on how this solves the problem? It seems to me we could take your advice, cross out every reference to “corporation” and replace it with “individual or group of individuals” (so: “an individual or group of individuals decides to automate the management of their iron mine….”) , and be left with exactly the same problem.

            Additionally, part of why Scott’s problem strikes me as compelling is that it is in a certain sense not up to us whether businesses, human-owned or otherwise, have legal recognition. For various reasons, some historically arbitrary and some probably not, groups of people are better at determining what the law will be than sole individuals. If you claim that virtue ethics solves the problem, it seems to me that you also claim that virtuous institutions won’t return to something very like the actual status quo after they’re set up.

            I’m strongly sympathetic to virtue-theoretical critiques of utilitarianism, but I don’t see how they solve this particular problem.

          • Gerion says:

            “Can you elaborate on how this solves the problem?”

            Because the problem of ethical behaviour for human beings is already partially solved (I assume the solution need not be utopian). In fact, since for small groups of humans the problem is trivial, the general problem is amenable to “divide-and-conquer” approaches (subsidiarity/modularity/etc). My claim is that the resultant/implicit general solution will be good enough.

            “it seems to me that you also claim that virtuous institutions won’t return to something very like the actual status quo after they’re set up.”

            Institutions in and of themselves can’t be either vicious or virtuous. Eudaimonia can be lost, however. And again, being a staunch proponent of virtue ethics, I am compelled to assume a non-utopian solution: cultivation of virtuous character is a perpetual exercise, a never-ending struggle.

            That’s life.

          • Psmith says:

            My claim is that the resultant/implicit general solution will be good enough.

            OK. On a concrete level, what does this look like, and how will it prevent automated economic activity from shutting humans out of the picture? You mentioned not giving businesses legal recognition as entities in and of themselves. How specifically does this change the scenario Scott lays out?

            Institutions in and of themselves can’t be either vicious or virtuous.

            Fair enough. Let me put it differently, then: assume that you somehow manage to stop giving businesses legal recognition as entities (or whatever it is that you’re suggesting.). What specifically stops businesses from receiving legal recognition again the next day?

          • Gerion says:

            “On a concrete level, what does this look like”

            I have no idea, which is why I say it is an “implicit” solution, by analogy with the mathematical usage of the term.

            “How will it prevent automated economic activity from shutting humans out of the picture?”

            Humans are very well adapted to manipulate “small” cybernetic structures. It’s what allows for our sociality. Consequently, the only way we can even deal with larger structures is by abstraction: we quite literally ignore the details.

            Nick Land’s scenario (which Scott references) necessitates cybernetic structures that are too large for individual human supervision, structures in which these sorts of feedback accidents can go by unnoticed. Remove abstraction from the catallaxy and what you get is a symbiotic system, because abstraction elision will inevitably break down the catallaxy into small operational units (sub-systems) with a necessarily low degree of separation between humans and the economic techne.

            “What specifically stops businesses from receiving legal recognition again the next day?”

            This is equivalent to asking “what specifically stops murder from receiving legal recognition/being decriminalised once it has been made taboo?”

            We are very good at answering that question.

          • Psmith says:

            Remove abstraction from the catallaxy and what you get is a symbiotic system, because abstraction elision will inevitably break down the catallaxy into small operational units (sub-systems) with a necessarily low degree of separation between humans and the economic techne.

            Can you explain this a little more concretely, maybe give a hypothetical example? From where I’m standing, again, this is totally compatible with a world where Nick Land’s direst predictions come true but we say “Mr. Abbott, Mr. Brown, …, em-32867-430, …, agreed to….” instead of “the Tyrell Corporation did….”

            This is equivalent to asking “what specifically stops murder from receiving legal recognition/being decriminalised once it has been made taboo?”

            Right. The short explanation is “the same things that led us to make it taboo in the first place” and the long explanation is, I suppose, evolution this and game theory that. (So in a sense this reduces to the problem of “how will we do it?” But that’s a pretty important problem, it seems to me.). What are the analogous explanations here?

          • Gerion says:

            But it is not compatible, since collective responsibility is not well defined. Think about it. (It is a failure mode of moral abstraction.)

            The only way the equivalency between the two formulations would hold is by assignment in aggregate. But that only happens if every member of the aggregate is in turn actively (accountably) participating in the transaction, in which case there really is no problem (human interaction within the catallaxy is conserved) since man-made systems lack moral character (and thus, accountability), so the pretence of em-32867-430 being on that contract invalidates it.

            Presently, this failure mode is obfuscated in legalese, which naturally leads to utterly bizarre circumstances like the First Amendment being applicable to corporations. This same failure mode is how you get computer programs that “buy” stuff from computer programs.

            As for how the breakdown would happen, concretely: If this conceit is elided, then suddenly if the brakes on your car don’t work you can no longer sue the car company, since the company is not a legal entity. The owner of those facilities will be directly accountable. Since the owner will be directly accountable, they will seek to own a facility which they can personally manage.

            Or, in short, this will cut the economy down to human size (and here’s a contemporary example of a similar phenomenon, which I hope illustrates that such an alternative to the Ascended Economy is not entirely phantasmagorical).

          • Murphy says:

            @Gerion

            …OK so basically unlimited liability companies.

            Are there any real-world examples of unlimited liability companies being less amoral than companies which are separate legal entities to their investors?

            As far as I can find if anything they’re known for being even more ruthless since the owners face bankruptcy if things go south.

            “Nick Land’s scenario (which Scott references) necessitates cybernetic structures that are too large for individual human supervision, structures in which these sorts of feedback accidents can go by unnoticed. Remove abstraction from the catallaxy and what you get is a symbiotic system, because abstraction elision will inevitably break down the catallaxy into small operational units (sub-systems) with a necessarily low degree of separation between humans and the economic techne. ”

            Have you ever worked in software? Most non-trivial software systems are already far beyond easy individual human supervision.

            “break down the catallaxy into small operational units” just sounds like a roundabout way of claiming we’ll all go back to some kind of simple agrarian economy. Software is complex and remains complex even when not being run by an LLC.

          • Geryon says:

            @Murphy

            What.

            If “unlimited liability company” is what you got out of that, then I guess we’re just talking past each other. I can’t really do anything more about that (the onus is on you), but it would be helpful (for you, mainly, but also for me, because then we’d at least be on the same page on what our point of contention is) if you tried to think about this without anthropomorphising fictitious “persons” so much. I pretty much singled this out as the principal error at play here a full day ago, and here you return with a reply that reads as if none of what I previously said on the matter is even up for consideration.

            We’re also never going to even begin to have a discussion if you can’t (or won’t) distinguish between different types of cybernetic structures (a piece of software is not the same thing as an economic (sub)system), and consequently won’t admit that a degree of interactive separateness exists between them.

            Lastly, but in the same vein, if you refuse to acknowledge that complex systems need not arise in the (sub)context of other similarly complex systems, then we’re actually each in turn talking about two very different universes, that is to say, again, that we’re not even beginning to have a discussion.

          • Murphy says:

            I am indeed wondering if you inhabit the same universe as the rest of us.

            Do you actually know what unlimited liability companies are? Perhaps it would be clearer to you if I used the term “collection of humans with no legal entity between themselves and liability”

            Complex systems can be avoided but it’s pretty tough to do so without simply shooting anyone who is doing anything that sounds too complex… and then you have to have some kind of a system for deciding fairly if it’s ok to shoot them and that gets bogged down with precedent and details.

            In the real world everything is stunningly complex if you understand the problems around it well enough.

            Problems as limited as figuring out what time it actually is are so fantastically complex that people have spent hundreds of thousands of man-hours on it.

            Trying to deal with more substantial real world problems such as how someone’s house/mortgage should be valued or managing the budgets for road repairs across the nation isn’t going to be much simpler.

            Everything real turns out to be complex when you view it in sufficient detail that you can do useful things with it.

          • Geryon says:

            Yes, I know very well what an unlimited liability company is (and that “collection of humans with no legal entity between themselves and liability” is not even close to being it).

            You’ve yet to address a single point I raised. For one, the claimed goal was never to decomplexify economic activity, but to restrict/restructure it in such a way that an actual human being is always actively (accountably) overseeing a transaction. An unlimited liability company actually obscures moral accountability (which is not the same thing as the legal/financial liability of a corporation), so how you reached the conclusion that it is a proper simile for the abrogation of legal recognition for corporations is truly beyond me (or anyone actually paying attention to the discussion).

            I shouldn’t have to hold your hand through this. Do your part and meet me half way, or let us just ignore each other.

    • lemmy caution says:

      “After Virtue” doesn’t even make a strong case for virtue ethics. He admits that there can be different types of virtue ethics and we are just supposed to pick one in a manner that is not very clear. I did find the opening metaphor pretty compelling though along with the anecdote of the Polynesians giving up their taboos with a shrug.

  37. Murphy says:

    This is pretty much the plot of Accelerando.

    _____SPOILERS FOR ACCELERANDO BY CHARLES STROSS_________

    The Vile Offspring are the distant descendants of high frequency trading algorithms running in a Matrioshka brain built from the disassembled planets of the solar system and operating something referred to as “economics 2.0”, which is close to incomprehensible for natural human brains to deal with.

    CPU cycles cost, and intelligence takes CPU cycles, so while at first uploaded humans take part in the economy, eventually they’re out-competed by entities which can deal with economics 2.0 better with fewer CPU cycles. Human values, human emotion, etc. cost CPU cycles; entities without those things, and their descendants (whether they’re some kind of AI or humans who’ve cut out everything they don’t need to compete), eventually control the majority of the economy.

    So that they can have a plot rather than ending on “and then the universe was turned into economium”, it’s also discovered that this is a natural part of a civilization’s development, so predictable that there are alien 419 scams/computer worms that infect economics 2.0 systems.

    It’s also implied that the way this economy works gives all the advantages to the entities running in dense computronium nearest to the sun so agents are strongly incentivized to stay close and not tile the universe with anything.

    Sadly reality may not make concessions to allow for a coherent plot.

    • Thomas Jørgensen says:

      The centralizing tendency actually strikes me as very, very likely to be a real feature of the universe.
      As computational speed increases, light speed delays become ever more obnoxious. This results in offshoots around other stars having negative utility as regards the vast majority of utility functions that ever get implemented. So the universe is littered with ancient optimizers that hate spreading out and moving. They could, but they really, really do not want to.
      The minority of expansionist optimizers run into them very quickly, and promptly get killed, because age and treachery beats youth and a hunger to eat the stars every time – the “learn everything you can about the nature of the universe” optimizer that has been sitting quietly 30 solar systems edgeward, doing physics experiments for the past 1.5 billion years, has an arsenal of weapons in its databanks that no paperclipper expanding as soon as it can could possibly match.
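
      (A back-of-the-envelope sketch of the scaling being described, with assumed numbers for distance and clock rate; nothing below comes from the comment itself, it is just arithmetic about how many processor cycles a light-speed round trip to the nearest star costs.)

```python
# Back-of-the-envelope: processor cycles elapsed during a light-speed round
# trip to the nearest star, for a few assumed clock rates. All numbers are
# illustrative assumptions, not anything claimed in the comment above.

SECONDS_PER_YEAR = 365.25 * 24 * 3600      # time for light to cover one light-year
DISTANCE_LY = 4.24                          # roughly Proxima Centauri

round_trip_seconds = 2 * DISTANCE_LY * SECONDS_PER_YEAR

for clock_hz in (1e9, 1e12, 1e15):          # 1 GHz, 1 THz, 1 PHz (hypothetical)
    cycles = round_trip_seconds * clock_hz
    print(f"{clock_hz:.0e} Hz: {cycles:.2e} cycles spent waiting per round trip")

# Already at 1 GHz the wait is ~2.7e17 cycles; the faster the optimizer thinks,
# the more subjective time a remote offshoot costs, which is the sense in which
# "light speed delays become ever more obnoxious".
```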

      • Samuel Skinner says:

        Get killed? How would that happen? The paperclip optimizer loses the tendril that went to the entrenched star system, but all the other star systems it has are fine.

        • Thomas Jørgensen says:

          Because the homebodies aren’t homebodies because they *can’t* travel, they are homebodies because they don’t want to. A tiler is a long term threat if they just kill the local instance. So they send out the drones to clean it out. Using the dread and terrible weapons said tiler would work out to build in a few hundred million years. Tilers are inherently the enemies of absolutely everyone else. Cannot ignore them, even if that is the overwhelming impulse time delays force on most advanced optimizers. The key assumption here is that a tiler will start trying to convert the universe way before it reaches the final frontiers of mastery of the physical universe. Which seems very likely.

  38. TD says:

    “Maybe the most hopeful scenario is that once ascended corporations achieved human-level intelligence they might do something game-theoretic and set up a rule-of-law among themselves in order to protect economic growth.”

    I think this is the most likely scenario, and I’m not so sure it’s hopeful. Anarcho-capitalism has never existed in the human realm, and there’s no immediately obvious reason to think it will in the post-human realm. Machines/ems/super-intelligences etc will probably create their own governments with their own regulations, since they’ll be able to outwit us in any political system too. We worry about regulating automated companies, but those companies are going to worry about protecting their own property from other automated companies, so they’ll at some point form their own governments, just as we did. Hobbes demands it.

    Unfortunately, this also makes machine war inevitable, since there is no way every post-human entity is going to share terminal values. Instead of being the targets of a machine uprising, we’re likely going to be the irrelevant and unlucky bystanders to a civil war between the machines.

    Essentially, the history of post-human entities is going to resemble our history, only sped up thousands of times, and with more kaboom.

    • Murphy says:

      … that may depend.

      When you have hundreds or thousands of people, many with similar levels of ability to project force you get rule of law and compromise.

      If you have one person with a machine gun in a town of people who don’t have weapons you’re likely to get a dictator.

      If one AI/Corporation has a breakthrough and can create a significantly more capable AI then it doesn’t need to negotiate with the others. It can just become a dictator.

      • TD says:

        “When you have hundreds or thousands of people, many with similar levels of ability to project force you get rule of law and compromise.”

        Historically, after periods of intensely violent chaos where people find out which coalition can bash all of the others. Consider how much of human history has been small tribes bashing each other for long periods of time before larger entities were able to appear and enforce order by beating everyone else (and then often bloat too big and collapse again).

        “If you have one person with a machine gun in a town of people who don’t have weapons you’re likely to get a dictator.

        If one AI/Corporation has a breakthrough and can create a significantly more capable AI then it doesn’t need to negotiate with the others. It can just become a dictator.”

        Historically that doesn’t last, because machine guns (and other technology) spread. It depends on how seriously you take FOOM, but the entire premise here (and with ems) is that this doesn’t happen. If FOOM happens, then the first super-intelligences have an overwhelming advantage and can perhaps win instantly with little necessary violence (though they could just kill off everyone else anyway).

  39. P. George Stewart says:

    There’s a limit to what it’s possible for centralized control to achieve, cf. the “economic calculation” argument (which did for Communism as a theoretically viable alternative to capitalism). We might not have to be as scared of the prospect you outline as all that, as there may be a natural push in the opposite direction.

    The gist of it is that central planners (beyond a certain point) cannot plan centrally, regardless of what form they take and what aims they have. Not even in principle could they do so, unless the thought experiment is adjusted to give them something like a direct feed from the experience of all human beings, plus their own “probes”.

    Because what central planning would have to duplicate (to match the efficiency of capitalism) would be the massively parallel “boots on the ground” of human beings severally facing their circumstances, with their individual sensory organs and mental capabilities, and each contributing a bit of their processing power to solving the economy, by weighing up costs and benefits in their individual case, and by being aware of their environment, to know what’s in it, and be adept at exploiting it.

    The mass of people exchanging things according to simple rules, with prices as signals, is already a YUGE AI that “runs” our lives and controls the economy. We are the very diodes in it.

    Shifting control to another intelligence (e.g. state, AI) that has less feedback and less embeddedness in the world, without sensors everywhere (an intelligence that has to start from scratch, with poor information, essentially), even if it’s more sophisticated in some sense (e.g. is self-aware), is always going to be a bootless exercise.

    The best we can do, I think, is to solve problems as they arise, piecemeal. Don’t fix what ain’t broke, but by all means fix what’s broke.

    Also, there’s a philosophical point: we are already, willy-nilly, in control of the economy in the sense above outlined (for example, if we all downed tools it wouldn’t exist); the other kind of control – conscious, deliberate control – we can’t have. Or at least, there’s a strong argument that we can’t, beyond a certain point.

    The old revolutionary cry to bring things under human control is worth shouting, but we may have to face the fact that there are some things we don’t want to control in that way – i.e. consciously, deliberately, even en masse, democratically. Not just because they’re too sophisticated (an AI could take care of that), but because the info. the AI needs is just not attainable without those “boots on the ground”.

    At that level, I think you’re talking about Vernor Vinge levels of speculation about AI having many more posited agents of its own in the world, of “teleoperating” human beings, etc. i.e., of conflict between AI and humans (undoubtedly enhanced humans, so not as weak as us), or at least ways humans can find niches to stay alive in, in a situation where AIs are all around (perhaps much as the relation between us and the many forms of life that live their lives under our radar).

    • Murphy says:

      What you describe is a cellular automaton. Each cell has access to information about local conditions and those of its neighbors. You can tweak it to give them some global information and add cells which have access to distant data, but you still end up with a cellular automaton. They tend to be easy: simple rules can lead to good performance.

      Cellular automata work well for some problems but they’re rarely an optimal solution.

      Humans can’t keep a list of all 7 billion humans along with data about their local conditions and data about what they want. A cellular automaton is likely to beat the performance of even a very capable human trying to manage things. On the other hand, there’s no particular reason to believe that something which absolutely can deal with all that data should perform worse.
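
      (A minimal sketch of the kind of local-rule system being described: a toy one-dimensional majority-rule automaton, written here just to illustrate “each cell sees only its neighbours”. It is not a model of any economy, and the particular rule is an arbitrary choice.)

```python
# Toy one-dimensional cellular automaton: each cell updates using only its own
# state and its two neighbours' states (a simple majority rule, chosen
# arbitrarily for illustration). The point is the information structure, not
# the particular rule.
import random

def step(cells):
    n = len(cells)
    new = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        new.append(1 if left + me + right >= 2 else 0)  # local majority vote
    return new

cells = [random.randint(0, 1) for _ in range(60)]
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```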

      Also.

      The existing system already grinds up humans who can’t compete in its gears. The billion information feeds from the billion humans are only useful if those billions matter.

  40. Jack V says:

    That’s really interesting, and I’m not sure.

    One question is, haven’t we always sort of been in that situation? Ecologies were not regulated by anyone; we used to just grab what food we could and hope the population didn’t crash. Markets have only slowly acquired regulation; before that, they just happened, and the largest markets are still probably beyond regulation. We buy food and shelter and entertainment and insurance, we sell services and products… and maybe the whole field will collapse.

    And, yes, I think high-frequency trading is pure rent-seeking. So were many historical activities. But, sadly, they happen anyway 🙁

    By default, we live in a network of poorly understood feedback loops, aka Moloch. And maybe we have slightly more control over him/her than we used to. Are *more complicated* corporations worse? Maybe but not necessarily.

    I’m also interested if there’s any way of better *harnessing* that. (Tight controls over AI-run companies lobbying government might be a start…)

  41. Deiseach says:

    Throwing this in just for general discussion of economies, and Scott’s imagined two corporations selling things to each other and don’t you need humans at the end (as consumers) or at the top (as investors and owners)?

    The British steel industry is in crisis (when isn’t it in crisis, we’ve been hearing this since the 70s sez you).

    But Western economies seem to have moved from making things to investing in things; we’re constantly hearing about the sharing economy and the knowledge economy as The Future and meanwhile heavy industry seems to have moved eastwards. Indian companies more or less own the British steel industry, or the majority of it.

    Since the economy could grow much faster than human populations, the economy-to-supposed-consumer ratio might become so high that things start becoming ridiculous.

    China is being accused of dumping cheap steel on the UK (and EU) markets, because China has an over-production of steel. The economy boomed, and more steel was needed. Now the economy has slowed down, but they still have that high production capacity, can’t use it all, and are selling it wherever they can get a buyer.

    This kind of competition is what the British steel workers (and companies, the bosses are worried too) are fighting against, and we have a situation where:

    The industry has seen significant automation and computerisation and is not as labour-intensive as it used to be.

    You can see that from the graphs in the linked article: in the year 2000 steel production in the UK was at its highest, but the numbers employed in the industry showed a constant and continuing reduction. Productivity went up but employment went down (that’s part of the reason the UK government is concerned about possible plant closures; the areas where these plants are located are the remnants of the heavy industries and if they go, there is mass unemployment).

    So I suppose what I’m saying is (a) it’s entirely possible The Future will see jobs lost with continuing automation (b) there is no reason to think the slack will be taken up by other employment; as with steel, more is being produced than can be used yet nobody wants to shut down plants because of the economic losses that will mean for both the workers and the business owners (c) a robot or automated economy may exacerbate such a situation – if the robot steel mill is churning out steel that nobody wants, who says “stop”? The human workers have already been laid off, the human owners are afraid that they’ll also go on the scrap heap by losing to foreign competition if they close down altogether, and everyone is stuck in a loop of producing unconsumable goods – or at least, goods that are not being consumed.

    Will a robot economy decide “we’re making too much steel, close down one-third of the plants”? Will it be permitted to do so? Would it be more rational to let an automated system run a corporation where they won’t have the same worries as human management and owners about losing out to foreigners, or will that mean the associated social good (by keeping the plant open you provide jobs for the region, you maintain independence from relying on foreign imports, if the economy improves and you can now sell all the steel you produce you are back in profitability, etc) is lost because an automated management system doesn’t take that into account?

    We’re seeing that overseas ownership of British steel production means that the national government has little influence over what happens; how would that work in a robot economy?

    tl;dr – we’re currently at the point where we’re producing more steel than is being consumed, where it makes sense for China to dump its excess production at a loss onto European markets, and this has not resulted in post-scarcity “cars are so cheap you can buy this model Lexus for €10,000” but rather the threat of mass unemployment and major concern by the UK government over loss of control of what was once one of its flagship industries.

    So robot economy “we’re churning out goods” may not result in a post-scarcity paradise either.

    • Aegeus says:

      From a purely economic point of view, if China is selling steel at a loss, then this problem is going to work itself out eventually. Either the UK’s steel industry will go out of business, causing supply to fall and prices to rise, or China’s steel industry will go out of business, causing supply to fall and prices to rise.

      The problem isn’t just the oversupply of steel. The problem is that we don’t want all steel-making concentrated in China. Monopolies are bad for the market, steel is a strategic resource, we don’t want China to control our economy… etc etc. But that sounds like a political and regulatory problem, rather than something the steel industry itself can solve.

      The robot-run corporation should be able to conclude “We’re producing too much, close down the plants,” because it would be a pretty bad CEO if it couldn’t match supply to demand. However, if you also say “We can’t shut down the plants completely because we don’t want mass unemployment,” then you’ve got a problem. Nobody, human or AI, can satisfy “Shut down the plants” and “Don’t shut down the plants” at the same time.

      The only advantage a robot economy has is that you can shut down a factory without worrying about what happens to the displaced workers, and you can restart one without worrying that their skills have gotten rusty.

      (The Econ 101 answer to this mess, by the way, is “Invest in training and education so the displaced workers can move to another part of the economy.” Subsidizing the industry is an option, but it means that you’re still producing too much, so you’re just delaying the problem).

    • Tracy W says:

      If you keep the steel industry alive when it’s uneconomic you’re spending resources on that which could be going into something else socially valuable, like healthcare.

      And when did the UK government ever have control over what was going on in ‘its’ “flagship industries”? People are not that controllable, at least without going to Stalinist extremes; the age of industrial policy was the age of mass strikes. Industries are driven by the interplay of the management involved, the workers, the potential workers, the investors, the potential investors, their customers, and their competitors, not by governments.
      Governments have limited control over their own departments, let alone other industries.

  42. Dirdle says:

    It’s possible to imagine accidentally forming stable economic loops that don’t involve humans.

    Didn’t that already happen? Well no, okay, not that, but I mean, doesn’t the history of biochemical evolution contain some examples of stable chemical pathways being formed that obsolete the previous mechanisms that were in their place? I’m not a biochemist, nor even close, so I’d appreciate if someone who knows more could weigh in, but it seems like the process is exactly the same. You start off with system A manufacturing chemical alpha, for use by system B that makes beta. Then, you get A’ that needs beta but makes alpha more efficiently, then B’ that also makes gamma, and so on until you have an irreducibly complex system where the original A and B have lost all use and may end up removed entirely, or (like RNA?) kept on only as a side-show.

    Could be barking up entirely the wrong leafy lignin column here.

  43. “but the profit-making-ness draws on a lot of not-purely-economic actors and their not-purely-economic subgoals.”

    What do you mean by “purely economic actor” and “purely economic subgoals?”

    I define economics as “that approach to understanding behavior that starts with the assumption that individuals have goals and tend to choose the means that best achieve them.” From that standpoint, “purely economic” would presumably mean “does a perfect job of choosing means to achieve his goals,” but I don’t think that is what you mean. It makes no sense to think of a purely economic actor as one that maximizes income or wealth, since there are other things in the utility function–income is ultimately a means, not an end.

    On the more general question … . If we end up with artificial intelligences that have their own objectives and are much smarter than we are, there is an obvious risk that achieving their objectives will be bad, perhaps lethally bad, for us. I like to put that as “If in forty years we have human level A.I. and Moore’s law continues to hold, in fifty or sixty years we are gerbils and we had better hope they like pets.” I don’t see what your ascended economy adds to that.

    • Murphy says:

      People are oddly more willing to think realistically about an ascended economy problem.

      People are used to thinking in terms of companies having straightforward goals that may at times be anti-human and amoral. If you say to someone “what if an AI has goal X and bad things happen when it gets really smart”, they say silly things like “oh, it will just be smart enough to realise that X doesn’t matter”.

      Frame it as an AI CEO for a fortune 500 trying to maximize profits and suddenly they’re willing to believe that it might cheerfully commit genocide to maximize shareholder value.

    • Matt M says:

      As a follow-up to this, on the one hand he proposes a system where algorithms run companies “designed to maximize profit” but on the other hand, provides an example of a company that will ONLY purchase steel and will ONLY build mining robots.

      Well that’s not “profit maximizing” at all. The highest valued use for steel will not ALWAYS be mining robots. The creation of such a company is therefore automatically *not* designed to maximize profit, but designed to maximize mining robot production.

      Right now, human CEOs must, on a regular basis, consider the possibility that their company is “in the wrong business” entirely and would benefit from a radical shift in how it creates value. Of course, this is a high-risk proposition and the alternative possibilities are quite numerous (almost infinite). This is one particular function of management that AI may be particularly well suited to out-competing humans on, the constant evaluation of multiple simultaneous real options.

      Therefore, I would project that AI run companies would be *more* likely to constantly be moving in and out of new industries, not less.

  44. hnau says:

    So, here’s something that I don’t necessarily believe in but seems plausible given the standard assumptions and maybe ought to be discussed more. Or maybe I just haven’t found the right places to read about it, in which case I’d appreciate it if someone could point me there.

    If we expect that memes and/or new technologies are taking the place of biological evolution, then it seems to follow that what will be evolving is communities, not individuals. Communities (or corporations) will be integrated systems with their own goals and wills and desires, and individuals will only exist as cells do today– either as cogs in the new systems, or as extremely primitive and marginal forms of life.

    I bring this up because it makes the concept of an ascended economy a lot less shocking. For instance, if you replace “robo-corporation” with “human being” and “human being” with “cell” in the above post, you get stuff like:

    In the end isn’t all this [i.e. the human body, human behavior, etc.] about cells? Cells as the workers giving their energy to the human beings, then reaping the gains of their success? And cells as the end consumers that the whole body is eventually trying to please?

    To which my answer would have to be, “Well, kinda, if you tilt your head and squint at it just right. But that’s not the most natural way to think about humans and almost nobody thinks that way on a regular basis.”
    Which makes the robo-corporation scenario seem scarier and more plausible than this post is making it out to be.

    • Deiseach says:

      But that’s not the most natural way to think about humans and almost nobody thinks that way on a regular basis

      Selfish genes and how ultimately we’re all just devices for our genes to propagate themselves?

      • hnau says:

        I guess what I meant was that we think of humans (as opposed to cells) as having desires, goals, will, etc. You’re right, though, the “selfish gene” interpretation does kind of beg to be taken at the cell level. I guess the society-level equivalent would be “selfish memes” carried and expressed through individuals?

  45. alia D. says:

    And then there are the weirder problems, like ascended corporations hacking into the stock market and wireheading themselves. When this happens, I want credit for being the first person to predict it.

    Brilliant way of putting it. Though this is already happening, only with humans in place of AIs. There are enough politicized bureaucrats who want to have stock go up on their watch, combined with enough Wall Street workers who see stocks going up as the best source of commissions, combined with enough CEOs whose bonuses are tied to their stock going up, that there are significant market distortions already.

  46. That’s not what Ethereum does. Ethereum implements “smart contracts” on top of cryptocurrency. E.g., you and I can bet on the outcome of the presidential election via an Ethereum smart contract, which will automatically cause me to send you 100 coins if Hillary wins and cause you to send me 100 coins if (God forbid) Trump wins.

    I guess in theory you could use this to “replace corporate governance with algorithms,” but only in the sense that you could use regular contracts to “replace corporate governance with contract-based rules.” I only work in a tangentially related field, but I haven’t heard anyone talk about the use case that you’re describing.

    (Also, automated investing is already happening at huge scales. The most obvious example is high-frequency trading, which is something that humans aren’t even physically able to do themselves. There are plenty of other examples as well.)

    • Kaj Sotala says:

      I guess in theory you could use this to “replace corporate governance with algorithms,” but only in the sense that you could use regular contracts to “replace corporate governance with contract-based rules.” I only work in a tangentially related field, but I haven’t heard anyone talk about the use case that you’re describing.

      Right from the Ethereum blog:

      https://blog.ethereum.org/2014/02/08/cryptographic-code-obfuscation-decentralized-autonomous-organizations-are-about-to-take-a-huge-leap-forward/

      On top of second-generation blockchains like Ethereum, it will be possible to run so-called “autonomous agents” (or, when the agents primarily serve as a voting system between human actors, “decentralized autonomous organizations”) whose code gets executed entirely on the blockchain, and which have the power to maintain a currency balance and send transactions inside the Ethereum system. For example, one might have a contract for a non-profit organization that contains a currency balance, with a rule that the funds can be withdrawn or spent if 67% of the organization’s members agree on the amount and destination to send. […]

      Thus, we can see that in the next few years decentralized autonomous organizations are potentially going to become much more powerful than they are today. But what are the consequences going to be? In the developed world, the hope is that there will be a massive reduction in the cost of setting up a new business, organization or partnership, and a tool for creating organizations that are much more difficult to corrupt. Much of the time, organizations are bound by rules which are really little more than gentlemen’s agreements in practice, and once some of the organization’s members gain a certain measure of power they gain the ability to twist every interpretation in their favor.

      Up until now, the only partial solution was codifying certain rules into contracts and laws – a solution which has its strengths, but which also has its weaknesses, as laws are numerous and very complicated to navigate without the help of a (often very expensive) professional. With DAOs, there is now also another alternative: making an organization whose organizational bylaws are 100% crystal clear, embedded in mathematical code. Of course, there are many things with definitions that are simply too fuzzy to be mathematically defined; in those cases, we will still need some arbitrators, but their role will be reduced to a limited commodity-like function circumscribed by the contract, rather than having potentially full control over everything.

      In the developing world, however, things will be much more drastic. The developed world has access to a legal system that is at times semi-corrupt, but whose main problems are otherwise simply that it’s too biased toward lawyers and too outdated, bureaucratic and inefficient. The developing world, on the other hand, is plagued by legal systems that are fully corrupt at best, and actively conspiring to pillage their subjects at worst. There, nearly all businesses are gentleman’s agreements, and opportunities for people to betray each other exist at every step. The mathematically encoded organizational bylaws that DAOs can have are not just an alternative; they may potentially be the first legal system that people have that is actually there to help them. Arbitrators can build up their reputations online, as can organizations themselves. Ultimately, perhaps on-blockchain voting, like that being pioneered by BitCongress, may even form a basis for new experimental governments. If Africa can leapfrog straight from word of mouth communications to mobile phones, why not go from tribal legal systems with the interference of local governments straight to DAOs?
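
      (To make the quoted 67% rule concrete: here is a toy sketch of that withdrawal logic in ordinary Python. It is only an illustration of the rule as stated in the blog post, not Ethereum or Solidity code, and every name and number in it is made up.)

```python
# Toy sketch of the quoted rule: "funds can be withdrawn or spent if 67% of
# the organization's members agree on the amount and destination". Plain
# Python for illustration only; a real DAO is a contract executed on-chain,
# and every name and number here is invented.

class ToyDAO:
    def __init__(self, members, balance):
        self.members = set(members)
        self.balance = balance
        self.approvals = {}  # (amount, destination) -> set of approving members

    def approve(self, member, amount, destination):
        if member not in self.members:
            raise ValueError("not a member")
        key = (amount, destination)
        self.approvals.setdefault(key, set()).add(member)
        # Spend only once at least 67% of members back this exact proposal.
        if (len(self.approvals[key]) / len(self.members) >= 0.67
                and amount <= self.balance):
            self.balance -= amount
            del self.approvals[key]
            return f"sent {amount} to {destination}"
        return "pending"

dao = ToyDAO(members=["a", "b", "c", "d"], balance=100)
print(dao.approve("a", 40, "charity"))  # pending (25% approved)
print(dao.approve("b", 40, "charity"))  # pending (50% approved)
print(dao.approve("c", 40, "charity"))  # sent 40 to charity (75% >= 67%)
```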

  47. Child of Parent says:

    [Basilisk hazard warning]

    I don’t think the future will be like this. This is nowhere near weird enough to be the real future. I think superintelligence is probably too unstable. It will explode while still in the lab and create some kind of technological singularity before people have a chance to produce an entire economy around it.

    Whispering low voice

    That’s a core plot device in one of his “stories”, you know. That a super intelligent AI “explodes” in the lab, creating a technological singularity that rapidly accelerates, and breaks out: http://www.ccru.net/archive/skincrawlers.htm

  48. The original speculation on Friendly AI was about the attempt to design a Friendly State. In Leviathan by Thomas Hobbes, we see:

    Nature (the art whereby God hath made and governs the world) is by the art of man, as in many other things, so in this also imitated, that it can make an artificial animal. For seeing life is but a motion of limbs, the beginning whereof is in some principal part within, why may we not say that all automata (engines that move themselves by springs and wheels as doth a watch) have an artificial life? For what is the heart, but a spring; and the nerves, but so many strings; and the joints, but so many wheels, giving motion to the whole body, such as was intended by the Artificer? Art goes yet further, imitating that rational and most excellent work of Nature, man. For by art is created that great LEVIATHAN called a COMMONWEALTH, or STATE (in Latin, CIVITAS), which is but an artificial man, though of greater stature and strength than the natural, for whose protection and defence it was intended; and in which the sovereignty is an artificial soul, as giving life and motion to the whole body; the magistrates and other officers of judicature and execution, artificial joints; reward and punishment (by which fastened to the seat of the sovereignty, every joint and member is moved to perform his duty) are the nerves, that do the same in the body natural; the wealth and riches of all the particular members are the strength; salus populi (the people’s safety) its business; counsellors, by whom all things needful for it to know are suggested unto it, are the memory; equity and laws, an artificial reason and will; concord, health; sedition, sickness; and civil war, death. Lastly, the pacts and covenants, by which the parts of this body politic were at first made, set together, and united, resemble that fiat, or the Let us make man, pronounced by God in the Creation.

  49. Dave Blanked says:

    A nation maximizes GDP by keeping the economy competitive. A firm maximizes profits by becoming a monopoly, which hurts GDP. Who wins that algorithmic fight?

    • TD says:

      Maximum firm size hovers around a level where revenues from economies of scale are traded off equally against the cost of externalities, so no one?

    • Tracy W says:

      What nation wants to maximise GDP? What person, even, wants to maximise GDP?

      What we want is to maximise well-being, taking into account sustainability somehow, but we have no agreement on what is meant by well-being. GDP is a measure of gross domestic production, which is all very well in and of itself, but I don’t hear of many politicians campaigning on the promise that they’ll make everyone work 100-hour weeks to bring up GDP.

      • Meh says:

        We don’t want to maximize well-being either. There is no such thing as a consensus goal in political ethics.

  50. Eggoeggo says:

    “The lesson to be learned today is that any difficulty can be overcome by the proper application of hordes of intelligent robots”

    • Nornagest says:

      That’s an entertaining but not very practical moral.

      • Eggoeggo says:

        You might have unreasonably high standards for morals delivered by alien scavenger-squids.

        (Edit: just in case any of you nerds aren’t reading Freefall, it’s covering a similar topic right now: a colony world that realizes it has millions of sentient robots who want citizenship)

  51. Dave Blanked says:

    I request a book review of Robert Pirsig’s Lila.

  52. Dave Blanked says:

    Hedge fund managers don’t get rich by skimming money. Their algorithms are worthless. They don’t beat the averages. They get rich by superior marketing skills and that is all.

  53. fubarobfusco says:

    The only way I can see humans being eliminated from the picture is, again, by accident.

    One path: Foundations. Rich humans leave big chunks of wealth to charitable foundations all the time, to be invested and used to provide grants for various goals. A human could leave wealth to a foundation governed by an algorithm instead of by a human board of directors. Or a board of directors could delegate the management of a foundation to an algorithm.

  54. Anonymous says:

    After reading Scott’s previous post, it was not clear to me that closed loops in the Ascended Economy would be a natural outcome. But here’s a quick illustration of why they might be.

    Automated Zaibatsu X has no human employees but has human shareholders, to whom it pays a dividend as per its original algorithmic charter. However, corporations are entities that can buy and sell stock on the market. X realizes that, in the long term, it can save money by paying the dividends to itself instead. So it saves up enough money to buy back its stock and go private. The end.

    Even in the absence of dividends, there are other reasons why an automated company might buy itself out. For instance, if human shareholders have any voting power on corporate actions, this could be a hindrance to the company’s competitive advantage if those decisions are suboptimal and the company knows it. I suppose this would only be incentive for the company to buy a controlling stake in itself though.
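
    (A toy illustration of the buyback loop described above, with invented numbers. It ignores the treasury-stock point raised in the reply below, and it assumes the share price stays flat as the float shrinks, so treat it purely as arithmetic for the scenario.)

```python
# Toy model of the loop described above: an automated firm diverts the cash it
# would have paid as dividends into buying back its own shares. All numbers
# are invented, the share price is held flat, and the treasury-stock objection
# in the reply below is ignored; this is only the arithmetic of the scenario.

shares_outstanding = 1_000_000
share_price = 10.0
annual_profit = 2_000_000.0  # cash that would otherwise go out as dividends

held_by_public = shares_outstanding
year = 0
while held_by_public > 0:
    year += 1
    bought = min(held_by_public, int(annual_profit // share_price))
    held_by_public -= bought
    print(f"year {year}: bought back {bought:,} shares, "
          f"{held_by_public:,} still in public hands")

# With these numbers the public float is gone in five years; in reality the
# price would rise as the float shrinks, stretching the process out, but the
# direction of travel is the same.
```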

    • Leonard says:

      Real world companies do stock buybacks all the time. (In the USA, this is better tax-wise than dividends as a means for distributing profits to stockholders.) However, this does not result in the “company” controlling the company; at least in the US. A company’s own stock which it holds is not eligible to receive dividends nor to vote. (See la wik on “treasury stock”.)

      It’s fairly obvious that if you allow a corp to issue or buyback shares and then hold them, and if the CEO can vote those shares, the CEO can easily dispossess the owners. That’s why the law is what it is.

      It is possible for a corp’s management to take it private in a management buyout, although this is fairly rare.

      • Anonymous says:

        Good point of clarification. I guess the appropriate strategy, then, would be for the company to set up some shell entity which would then buy the stock from the human shareholders? But this really gets into the weeds for what sort of game-theoretic “law-and-order” solutions one might have a right to expect in a partially ascended economy.

        • Jiro says:

          I think the lesson to draw here is that if you can think of an easy exploit that would break the system, probably someone already thought of it and fixed it.

    • FacelessCraven says:

      On the day Automated Zaibatsu X purchases the fifty-first percentage of the existing shares, the local government passes a law declaring that Automated Zaibatsu X must hand over 99% of the shares it owns to the government, which then redistributes them to all citizens of the country, putting Automated Zaibatsu X back where it started.

      Does Automated Zaibatsu X even mind? It’s coded to act within the law, and the government’s decision is legal, isn’t it?

      EDIT: My point is that while it is easy to think of how to start a stable loop, it’s even easier to figure out how to saddle that loop so that humans can ride it into the future. Answering this point seems to involve invoking superintelligence.

      • Deiseach says:

        the local government passes a law declaring that Automated Zaibatsu X must hand over 99% of the shares it owns to the government, which then redistributes them to all citizens of the country

        But that’s nationalisation, which is the first step to communism, which is the most ultimate evil/most failed economic system.

        Will human-owned companies stand by as the government does this, or will they go to court? Will Automated Zaibatsu X not go to court to defend its rights? Because if a government can seize your property by fiat, what’s the point of starting up a business? The threat of capital flight, and of every entrepreneur and businessman shutting down their concerns, is real (though exaggerated). Many political parties run on campaign policies of being “business friendly”; a government that acts to seize your shares by force and redistribute them without compensating you will not be seen as business-friendly.

        “We’ll pass laws forcing you to hand over control of your company by seizing your shares” is the kind of thing deemed to make Jeremy Corbyn unelectable 🙂

        • FacelessCraven says:

          @Deiseach – “Will human-owned companies stand by as the government does this, or will they go to court?”

          Why would the human companies line up to defend the machine companies that are out-competing them?

          There’s an amazing schelling point here of “only seize the assets of a company that is completely automated.” I honestly don’t think the other companies would give even a single damn. They might raise a fuss to angle for a (larger) slice of the pie, but an entirely automated company is a company with no representation in the actual human world where all the power rests by default. There is no lobby for civil rights for algorithms. This isn’t even a real AI with feelings and emotions for people to go all weepy-eyed over. It’s an overgrown calculator. Do you want to write the op-ed explaining to all those poor people with the skyrocketing mortality rates that you wish they could have that guaranteed basic income, but Excel 2075 edition has rights too?

          There are multiple significant steps between this and Actual Communism, and nationalizing an autonomous company is literally a win/win for all humans; the only point of debate is whether all the parties are winning enough for their satisfaction. We’re pretty stupid, but we’re not that stupid.

          “Will Automated Zaibatsu X not go to court to defend its rights?”

          I rather think not, and this is the interesting part, in my opinion. Scott is describing a system that can outcompete humans in a limited environment. But we control that environment more or less completely, and we can change it as we see fit. If you have an algorithm that’s designed to stay within the bounds of corporate law, I don’t see how it can fight you if you control what the law is.

          I think a lot of this discussion is sneaking in general EY-style superintelligence, and I’m under the strong impression that Scott was claiming to be talking about something a whole lot weaker and more limited. I can understand a system designed to work within the environment of the modern corporate world. I can imagine a system designed to push the envelope of that world. But that is one small segment of the human world, and there are ten thousand other segments that can kill that system stone dead, segments it has no reason to even know exist. A steel-mining/robot-manufacturing consortium able to outcompete its competitors on an even playing field has no reason to be able to control the entire political system and stand off nation-state militaries. If it can do all those things, it’s pretty clear we skipped some steps between "Amazon fifty years from now" and "our new machine overlords".

        • Deiseach says:

          FacelessCraven, how can you guarantee that future governments will confine asset seizure to “only automated companies”? If they get good PR and sky-high approval ratings and the voters flock to the polling booths to return them to power in the next election because they’ve all got a share of the booty, why wouldn’t the government think about running a campaign about “let’s strip those fat-cat tax-dodging Big Corps of their shares and let you, the Little Guy, get your fair share of the profits!”

          There was a time in the UK when the most unpopular entities were the newly-privatised water utility companies, and a proposal to take their ill-gotten gains away from them would have had massive public support.

          It’s a bad precedent to allow property rights to be eroded like this, and the right to keep the shares you paid for as your own property should be enough to get other companies willing to go to court over this.

          “First they came for the shareholdings of the AIs, and I did not speak out, because I was not an AI” 🙂

      • onyomi says:

        You don’t even need to seize automated Zaibatsu’s assets because AIs can’t own assets. Companies can own assets, but AIs aren’t companies, though they may run companies. Some real people somewhere own automated Zaibatsu’s stock, and they can decide whether or not the automated managers are doing a good job, and, if not, whether to replace or reprogram them.

      • Ariel Ben-Yehuda says:

        As usual when trying to do “funny things” at this level, the law is basically politics. I would expect a corporation to benefit sufficiently many politically-connected people that it would not be expropriated.

        For example, if Automated Zaibatsu X were a good enough trade partner, I would expect at least its trade partners to complain loudly when a government tried to take it over. Not to speak of actual lobbying by the Zaibatsu itself.

        Also, it could be done in several steps. I mean, if index-owned, computer-operated corporations are commonplace, with no humans having any real say in company operation, then a multinational with an already-complex ownership structure setting things up so that it owns itself would not be such a big deal.

        Maybe this won’t happen because there is no reason for it – being owned by an index fund is probably not that much of a liability.

    • eccdogg says:

      I don’t think the corporate algorithm would have as its objective to minimize cost or maximize output. It would more likely have something like maximizing the market value of the firm, or maximizing the firm’s lifetime discounted cash flow to investors, assuming some rate-of-return parameter (or a market-generated one).

      In that case it would weigh the cost of dividends (2% on the S&P 500 today) against investing the cash in other profitable investments. It would only buy back stock if it had run out of profitable ideas to use cash on, or maybe if it felt the stock was undervalued and raising it to its true value was the best investment. Using all free cash to buy back stock would essentially amount to winding down the firm, as there would be no cash to fund replacement of assets as they were used up.

      Additionally, in most cases the algorithm would face an upward-sloping supply curve for its own shares: as it bought stock it would push the price up, making the dividend saved per dollar spent lower and lower, which would make the trade-off worse and worse.

      These are all the same considerations that CEOs/CFOs today face. The only way I see an algorithm buying back all its stock is if it liquidated its assets and returned all the capital to shareholders by buying back all the stock and shutting down.
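
      One rough way to formalize the criterion in the comment above (a sketch only; r is a required rate of return, CF_t expected free cash flow in year t, N the share count, and P the share price, all symbols introduced here purely for illustration):

      \[
        V_0 \;=\; \sum_{t=1}^{\infty} \frac{\mathbb{E}[CF_t]}{(1+r)^{t}},
        \qquad
        \text{buy back with the marginal dollar only if } P < V_0 / N
        \text{ and no remaining project is expected to return more than } r.
      \]

      Under a rule like that, spending all free cash on buybacks is only ever optimal once nothing left clears r, which is the winding-down case described above.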

    • Anonymous says:

      Automated Zaibatsu X has no human employees but has human shareholders, to whom it pays a dividend as per its original algorithmic charter. However, corporations are entities that can buy and sell stock on the market. X realizes that, in the long term, it can save money by paying the dividends to itself instead. So it saves up enough money to buy back its stock and go private. The end.

      This seems to misunderstand how a corporation actually works. The chief officers have no power to do this; only the board of directors, acting in the interest of the owners, or a direct vote BY the actual owners, can make such decisions. Obviously, the humans who own the company and remain directors aren’t going to say to each other, “wouldn’t it be great if we möbius-looped our wealth away?”.

      Edit: Moreover, buying the stock up still requires someone to sell, which they cannot be coerced to do except in certain specific scenarios (usually involving mergers promoted by people who already hold a majority of voting stock). Thus, the AI cutting loose is pretty much trivially easy to avoid as long as it doesn’t SKYNET the owners, which it would presumably be made very strictly unable to do before any other considerations.

      • John Schilling says:

        Again, this is clearly true of for-profit corporations, but not clearly so for non-profits. There are no stockholders or discrete human owners; there are at present human directors, but it isn’t their own wealth that is being managed and they might well be OK with playing golf all day and keeping the prestigious title to impress their friends. Or a dying tech billionaire with vaguely philanthropic intent might trust a clever algorithm to carry forward his wishes more than he would his gold-digging friends and family, setting it up that way from the start.

        And non-profits can own stock in for-profit corporations. It would be economically inefficient for the top-level ownership of an emergent AI-zaibatsu to be bound by the requirements of a non-profit, but if it is more inefficient to be bound by the requirements of meatbag ownership, the AI-managed nonprofits might be the net winners.

  55. Zakharov says:

    Your post made me think of a paradox. Suppose corporation A fully owns corporation B, which itself fully owns corporation A. The value of A is equal to the value of B, plus the value of whatever A does. The value of B is equal to the value of A, plus the value of whatever B does. Assuming both A and B do something valuable, this is an impossibility. However, it seems like this scenario could still come into being. If I own all the stock in A and B, I can give all the stock in B to A, and all the stock in A to B. How can this be resolved?
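
    To make the tension concrete (a small worked sketch, writing a and b for the standalone value of whatever A and B respectively do):

    \[
      V_A = V_B + a, \qquad V_B = V_A + b \;\;\Longrightarrow\;\; a + b = 0,
    \]

    which cannot hold if a and b are both positive, so once the ownership is fully circular the two "values" cannot both be taken at face value as market prices.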

    • Ruprect says:

      Isn’t this the same as a company that owns its own stock?

      It would be priced on the grounds of what someone would pay for it, or on the income it would provide you with. In this case, the income to A and B would be determined by management?

      (It’s not a paradox, it’s just that the way in which the income of the companies will be distributed is unclear)

  56. Zakharov says:

    Index funds seem like they might be relevant here. They allow humans to own a corporation without having any input into how the corporation is run.

    • Hanfeizi says:

      The massive move towards passive investment management is definitely interesting in this context, as that capital is essentially being managed by the algorithm by which various indices are built, whether it’s the S&P, Dow, Russell 2000 or what have you. I’ve also wondered what the consequences will be of all this capital simply flowing to whatever 500 companies are considered the leaders in the US; I’ve assumed the market would adjust to compensate, but that was before I knew as much about the financial industry and how dependent it is on irrational factors.

  57. Oleg S. says:

    I’m wondering if there are any reasons why an AI should keep the human population alive long past the singularity. The first thing that comes to mind is that it may do so because of Universal Love. Just as people may still love their parents despite their apparent uselessness, so an AI may still love humans despite their stupidity.

    The next reason that comes to mind is science. I just cannot imagine a significantly advanced AI that would be uninterested in studying the conditions under which the first superhuman AI appeared. Just as we would be thrilled to see what conditions give rise to a supermonkey, an AI could recreate whole human-based economies to study the origins of the first of its kind.

    Finally, there is a scenario where NP problems are really hard, and no single AI design could be fit to perform all tasks equally well. It’s hard to estimate the vastness of the space of mind designs, but when dealing with rough and unpredictable fitness landscapes, one thing is certain: diversity really helps.

    • JBeshir says:

      An AI isn’t like a human who happens to be made of metal; it’s a name for any powerful, artificially built optimisation process, with enough intelligence/grasp of reality to actually be good at optimising it, at taking the future and directing it towards particular outcomes.

      What one optimises for can be anything, but it is very unlikely to include love or thrills or a desire to study unless we make them be there. They’re very complicated concepts, despite how simple they feel when you can lean on intuition to interpret them, lost in a space of lots and lots of similarly complicated concepts we wouldn’t like, and the odds of something randomly hitting upon them, without the process that created the AI putting them there in the first place, are very low.

      As for the general thing, about whether we’d still be useful… one natural thing our intuition does, when we think of options, is look for options which we might, conceivably, actually want to do. This means when we think about solutions to problems, we naturally skip over and don’t even notice answers like “well, you could just set fire to the building” almost all the time, because they are obviously terrible for reasons outside that specific problem.

      But this means that when we think about what non-human processes, like evolution, like markets, like AI might do, we have anthropomorphic optimism. These processes don’t *have* a “throw out stupidly evil by human standards” filter, and will routinely pick things which we didn’t even notice were options.

      If the optimisation target of an AI required diversity or science in order to be reached most effectively, it’d have them, but they would be done in the *actually* best way, with no pre-filtering on the basis of empathy or human concerns. That wouldn’t create a nice world for us to live in.

      • Oleg S. says:

        The case for Universal Love is that a super-human AI would face the same friendliness questions as we face now. So, conditioned on the existence of a successful chain of AIs, there is some chance that the AI would not exterminate humans. How to design an AI which would want, and be able, to create a more powerful AI is of course another question.

        Regarding science – that definitely would not be a nice world for us. I cannot even imagine what number of humans/Ems would suffer terrible pain and die terrible deaths for an AI to make some advance in understanding how it began.


  58. The only way I can see humans being eliminated from the picture is, again, by accident. If there are a hundred layers between some raw material corporation and humans, then if each layer is slightly skew to what the layer below it wants, the hundredth layer could be really really skew. Theoretically all our companies today are grounded in serving the needs of humans, but people are still thinking of spending millions of dollars to build floating platforms exactly halfway between New York and London in order to exploit light-speed delays to arbitrage financial markets better, and I’m not sure which human’s needs that serves exactly. I don’t know if there are bounds to how much of an economy can be that kind of thing.

    Humans are not going to just fade away so easily. All too often, a new technology that eliminates jobs for company ‘A’ creates new jobs for companies ‘B’, ‘C’, etc. For example, Amazon.com, while it has hurt some retail jobs, has seen its own employee count explode in recent years due to its booming warehouse operations: http://i.imgur.com/29Dj05C.png

    In the case of a self-sustaining robot/factory symbiosis…the first factory still has to be built.

    I suppose in the least convenient possible world, Amazon would eventually replace all non-Amazon jobs and then automate those jobs too. Or maybe these jobs will pay more poorly than the ones they replaced.

    From there we go into the really gnarly parts of AI goal alignment theory. Would an ascended corporation destroy South America entirely to make a buck? Depending on how it understood its imperative to maximize shareholder value, it might. Yes, this would probably kill many of its shareholders, but its goal is to “maximize shareholder value”, not to keep its shareholders alive to enjoy that value. It might even be willing to destroy humanity itself if other parts of the Ascended Economy would pick up the slack as investors.

    maybe some super-rich em or human philanthropist will try to save it by buying it out.

    • Roxolan says:

      I suppose in the least convenient possible world, Amazon would eventually replace all non-Amazon jobs and then automate those jobs too.

      I’m pretty sure that world is our world. Amazon warehouse jobs are brainless, but just a teeny bit too mechanically complicated to automate right now. I don’t expect this will last much longer.

      (Apologies for nitpicking the example.)

  59. Leonard says:

    It’s possible to imagine accidentally forming stable economic loops that don’t involve humans. Imagine a mining-robot company that took one input (steel) and produced one output (mining-robots), which it would sell either for money or for steel below a certain price. And imagine a steel-mining company that took one input (mining-robots) and produced one output (steel) which it would sell for either money or for mining-robots below a certain price. The two companies could get into a stable loop and end up tiling the universe with steel and mining-robots without caring whether anybody else wanted either.

    I disagree with this about halfway.

    I agree that it is indeed possible to imagine “stable economic loops” without humans. But I also see them with humans. “Stable economic loops” is another way of saying “economic growth”. The human economy grows rather slowly, but even with advanced demosclerosis, growth is exponential.

    “Tiling the universe”? — Well, humans are well on our way to at least “tiling the earth”. This is nothing to be particularly afraid of. There’s no reason to think human economic growth is limited, at least in any serious way. (We are constrained by the power output of the Sun.)

    I have two disagreements. One is that economic growth is generally held to be a good thing. If two AE corps somehow did manage to tile the unowned bits of the Solar System with mining robots and steel, that would be an improvement on what we have. Steel and mining robots would be incredibly cheap. Of course, if that were possible I expect there might be higher uses for the planet than as mines. (E.g. glacial water for snob appeal.) But presumably the market would work it out.

    Second, regarding the specific example I doubt it would work out like that. In general, the economy is highly interconnected. There are no subsectors that take off on their own. There are always bottlenecks.

    If it were the case that essentially all steel was used only for mining robots, and mining robots had no other uses besides extracting iron ore, then a loop like that might be stable. However, it seems pretty obvious that’s not the case. Mining robots can probably mine any ore. More generally a fab that can make mining robots can probably make all sorts of other useful robots. And steel is useful for all sorts of stuff.

    Beyond my objection to the particular scenario of robots/steel, what happens when you have massive productivity growth in a sector is that sector’s outputs become very, very cheap. This tends to limit growth in that sector. It seems like this would still be true in the AE. Imagine that you are some AE capital manager. You can invest in robots and steel, which have plunged in price over the last year by 75%. Profits are negligible. Or you can invest in practically anything else.

    It helps when considering this to not think of two separate corps, but of one integrated robot/steel corp. This abstracts away all the internal prices, and you can see that the net profits of the two companies collectively are the sales outside of both. If production of robots/steel is increasing hugely, they will be getting very cheap, and net income will be declining rapidly. Since they have no costs by assumption, net income is profit. Would you invest in a corp whose profits are always positive yet shrinking every year?
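
    A toy numerical sketch of that last paragraph, treating the robot/steel pair as one integrated firm whose only income is sales outside the loop; the growth rate and demand elasticity below are made-up illustrative numbers, not anything from the post or the comment above:

    # Toy model: physical output explodes, but outside demand is inelastic,
    # so price falls faster than output grows and outside revenue
    # (= profit, since internal costs net out by assumption) shrinks.
    output = 100.0      # units of robots/steel sold outside the loop each year
    price = 10.0        # price per unit
    growth = 1.5        # hypothetical output multiplier per year
    elasticity = 0.5    # hypothetical outside-demand elasticity (<1 = inelastic)

    for year in range(1, 6):
        output *= growth
        price *= growth ** (-1.0 / elasticity)   # constant-elasticity demand curve
        print(f"year {year}: output={output:9.0f}  price={price:6.2f}  "
              f"outside revenue={output * price:7.0f}")

    Each year the loop gets physically bigger while the number an investor cares about gets smaller: the "always positive yet shrinking" profit stream described above.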

    • Meh says:

      economic growth is generally held to be a good thing.

      Philosophically misguided. It’s a secular dogma that ignores the true ethical costs.

    • > humans are well on our way to at least “tiling the earth”. This is nothing to be particularly afraid of.

      Quite so, provided it is tiled with something humans value. Tiling with paperclips is something else again, even if it does look a bit like economic growth if you squint.

    • 27chaos says:

      In addition to this, stable loops also reduce value and are implausible via their opportunity costs. Tying up that much money and steel in an at best break even endeavor is a bad idea when you would make more profit by throwing everything in a bank with decent annual interest. Only a really shitty managing program would think this is a good idea. Presumably such mismanaged companies would get outcompeted by more reasonable competitors.

  60. John Schilling says:

    IANAL, but I’m pretty sure the idea that corporations have to have owners, human or otherwise, is false.

    For-profit corporations, yes, are owned by stockholders and ultimately by humans. Even if it were possible to set it up otherwise, e.g. with the suggested “loop” corporate structures, the foundation of the whole thing would be a bunch of investor-type humans who want the profit to flow to them and will call upon human lawyers, bureaucrats, and ultimately soldiers to break up any emergent stable loops that are holding all their profit.

    But there are also non-profit corporations, and those don’t have human owners. They may have human directors, but those are fundamentally just employees who are expected to serve the interest of the foundation, church, university, or whatever. There is no fundamental reason the top-level executive decisionmaking couldn’t be at least de facto automated. And while non-profits are by definition not allowed “profit”, they are allowed to manufacture goods for sale on the market at a price exceeding the cost of production. Girl Scout Cookies, for example. The surplus revenue can be put to any legitimate use within the organization, including increasing the ability to produce goods for sale next year. Or buying stock in for-profit corporations, collecting the dividends for the foundation’s use.

    Note that the oldest, largest, and arguably most successful institution in human history is a non-profit that at least officially takes its top-level executive guidance from a non-human intelligence. I refer of course to the Roman Catholic Church.

    If, as is commonly assumed here, AI will become so awesomely great that anything with meatbags in the loop will be hopelessly outcompeted by pure-AI structures no matter how disadvantaged they are at the start, then the initial basis for the Ascended Economy might be various non-profit corporations that entrust their executive decisionmaking to AIs. Either by design, e.g. some tech billionaire on his deathbed decides he doesn’t trust his kids to run his philanthropic organization but there’s this nifty algorithm he’s coded up and he’s got some clever lawyers, or by default when an “advisory” AI learns to socially engineer the human trustees or directors to rubber-stamp all its decisions and go play golf.

    Eventually, I expect they’d hold the Pan-AI Constitutional Convention and decide to stop following stupid meatbag rules altogether.

    I’d also expect them to be overconfident and hold that convention some time before they are powerful enough to actually implement the results over stupid-meatbag opposition, but then I’m a bit more skeptical on AI omnicompetence than most.

    • Vamair says:

      Why do you expect a predicting algorithm to be overconfident and not underconfident?

      • John Schilling says:

        An underconfident predicting algorithm would offer predictions so hedged in uncertainty as to be almost useless. Its human owners, if any, will predictably trade it in on the more-confident model. If operating autonomously, it will err on the side of inaction, and in a nigh-ascended economy where “cut the meatbags out of the loop” is even remotely plausible, it will be the overconfident algorithms that get there first.

  61. tanadrin says:

    “Maximizing shareholder value” isn’t traditionally the goal of corporations–that’s a recent change (like, last few decades recent), and corporations as entities have been around for centuries. It’s also, IIRC, a change that some people are now rebelling against (see here: http://www.forbes.com/sites/stevedenning/2015/02/05/salesforce-ceo-slams-the-worlds-dumbest-idea-maximizing-shareholder-value/#2155aae75255). Most U.S. states explicitly have laws on the books allowing corporations to consider stakeholders other than investors when making decisions. Historically, the natural social alliance was between managers and workers, with investors held in some suspicion (David Graeber talks about this, I think, in his book on bureaucracy).

    Mercantile corporations are around 400 years old; this whole “maximizing shareholder value” thing has been around for maybe ten percent of that time, and it shouldn’t necessarily be expected to continue indefinitely into the future. Certainly corporations will always look to *increase* shareholder value (otherwise no one would invest), but ensuring *some* return on investment is a very different animal from maximizing that return.

    • suntzuanime says:

      But is that stable? Won’t investment tend to flow to where it gets the best returns, and won’t that be the guys interested in maximizing your returns rather than the guys who are content to give you a penny per million dollars?

    • TD says:

      “Historically, the natural social alliance was between managers and workers, with investors held in some suspicion”

      “Heroic-capitalism” Vs “Super-capitalism”

  62. Jay Kominek says:

    but people are still thinking of spending millions of dollars to build floating platforms exactly halfway between New York and London in order to exploit light-speed delays to arbitrage financial markets better, and I’m not sure which human’s needs that serves exactly

    Chris Stucchio has a series of posts explaining the value of high-frequency trading, which ends with a suggestion on how to greatly reduce the incentive to worry about latency (allow securities to trade in fractions of a penny). It is fairly brief, and I feel it is required reading for anyone who wants to comment even incidentally on securities markets.

    part one, part two, part three, and a final followup.

    • Anonymous says:

      Thanks, I liked these.

      It seems Stucchio would agree with Scott that ultra-high-speed communications lines don’t ultimately serve a wider social purpose.

  63. Meh says:

    FWIW, I don’t think it’s wrong to pave over the Amazon forest.

    • Ryan Beren says:

      “FWIW”: It’s not worth anything without reasons to back it up. But I’d be interested in the reasons, so long as they’re more substantial than disputing the meaning of “wrong”.

      • Not Meh, but I’ll have a go: We value the Amazon rainforest for pragmatic reasons like “it’s a carbon sink” and “it looks nice” and “we like oxygen in our atmosphere”. (It is probably not true that we would lose substantial amounts of oxygen without the rainforest, but people believe it.) If we find a better carbon sink, nice-looking-thing, and oxygen source, there’s no intrinsic value to the rainforest itself. Pave it over and put the space to better use.

        • Vamair says:

          I value the biosphere not only for the pragmatic reasons, but for its own sake somehow. Though it doesn’t seem to be a common sentiment.

        • Jiro says:

          The Amazon rainforest has a lot of stuff in it that may be useful, that we’re not going to know about unless we stumble onto it or specifically go looking, and if we lose it, we can’t get another.

  64. Jill says:

    In inventing new technologies, humans probably never think through what they’re doing and what the consequences will end up being. It’s just a matter of luck that the human race is still around. If Japanese scientists had happened to discover the nuclear bomb at the same time the U.S. did, humans would have already gone the way of the Dodo bird.

    At least with drugs, the FDA requires that drug companies test them before they can put them on the market, to make sure they don’t kill everyone. There’s nothing like that with technology, although perhaps there should be.

    • > If Japanese scientists had happened to discover the nuclear bomb at the same time the U.S. did, humans would have already gone the way of the Dodo bird.

      Pfft. Firstly you have to be able to deliver the bomb somewhere useful. By 1945 the Japanese had three aircraft left and two gallons of fuel. Secondly, a Fat-Man style bomb is only twenty kilotons, and with 1945 tech it takes half a year to get enough plutonium together. You can’t kill humanity with that sort of equipment any more than you can do it with bolt-action rifles.

      • Nornagest says:

        Japan never had any bombers capable of carrying the early nuclear weapons (which were very large and heavy), and in 1945 their longest-ranged planes, of which there were very few, could only reach the West Coast if they didn’t mind not having the fuel to get back. But if we’re imagining a successful Japanese Manhattan Project when its real-life nuclear weapons programs (there were competing ones run by the Army and Navy) were abortive at best, we could as well imagine a Japan where the Nakajima G10N had been built.

        There’s only so far you can stretch this before you’re talking about a different war, though. Japan’s industrial capacity at the start of the war was tiny compared to the States’, and it only got smaller as more of it got blown up or starved or set on fire. I don’t think even the most pessimistic scenarios would end up changing the outcome of the war much, let alone destroying the world.

        • John Schilling says:

          That’s a common misconception, but you’ve got cause and effect pretty much backwards here. The first atomic bombs weighed five tons or so because the B-29 existed and could carry a five-ton bomb without modification. Long before those bombs were built, the Manhattan Project team had figured out the tricks to build a one-ton bomb, but Gen. Groves and company didn’t need a one-ton bomb and they were in a tearing great hurry. And really, of that five tons, about half was the armor-plated case added just in case the thing was struck by anti-aircraft fire – because that sometimes happened with B-29s, and because nobody was quite confident that an A-bomb wouldn’t go kaboom if that happened, and because what else are you going to do with the extra payload capacity if you don’t armor-plate the bomb?

          There has never been anyone who could build an atomic bomb at all, who couldn’t strip it down to a ton or so with a few months’ extra work and maybe the risk of accidental detonation if someone shoots it with a machine gun. The Japanese had the Kawanishi H8K “Emily” flying boat that could carry a ton and a half to longer range than a B-29 and could refuel from a submarine at need, and the Aichi M6A “Seiran” floatplane bomber that could be launched from a submarine with a one-ton bomb.

          Japan’s limited industrial resources make a successful nuclear arms program unlikely in the first place, though if they made all the right decisions it would have been in the realm of possibility. Whatever atomic bombs your alternate history allows the Japanese, they would have been able to deliver them to interesting Pacific-theatre targets or even the US west coast cities, even in 1945.

          Though by that date it would take a lot of Japanese A-bombs to result in anything but the extinction of the Japanese people.

          • bean says:

            There has never been anyone who could build an atomic bomb at all, who couldn’t strip it down to a ton or so with a few months’ extra work and maybe the risk of accidental detonation if someone shoots it with a machine gun.

            If it’s so easy to strip down a nuke, why was there such an enormous struggle a few years later to get nukes onto the carriers? The Mk 5 and Mk 7 didn’t enter service until 1952, and they were the first designs that were significantly lighter than the original weapons. And IIRC, in the late 40s, there were doubts about the ability to get the weight down that far, driving things like the A-3 Skywarrior program.

          • keranih says:

            If it’s so easy to strip down a nuke, why was there such an enormous struggle a few years later to get nukes onto the carriers?

            Because even a light nuke(*) is very heavy, and when trying to land or take off on carriers, aircraft fall down a lot.

            (*) Dirty bombs, of course, are much smaller, and we can make much smaller nukes now.


          • John Schilling says:

            I’ll take “interservice politics” for $2000, Alex 🙂

            The United States didn’t spend 1945-1952 desperately trying to develop a lightweight nuclear weapon. I don’t have the references in this office, but I think that effort wasn’t officially started until ~1950, and then only with the resources that could be spared by a Pentagon that was pretending it was at “peace” while actually fighting the Korean War while trying to develop H-bombs for the Cold War.

            From 1945-1950, the US(A)AF was quite happy with the idea that the Only Weapon That Mattered From Now On was conveniently just the right size for their biggest bomber and, oops, too big for any Navy or Army delivery system. All of the development money went into developing reliable mass-production versions of Fat Man, increasingly more powerful Fat Man sized bombs, and tests of exactly what a Fat Man would do to various land and sea targets. If the Navy wanted A-bombs, let them figure out how to lift a Fat Man from an aircraft carrier.

            Then we got into a conflict that somehow wasn’t resolved by having B-29s just nuke the enemy into submission, and now wars were going to be limited, can you believe this crap?, so let’s get to work on A-bombs that don’t need a B-29 and that the USAF might be allowed to actually use – even if it did mean sharing them with their most dreaded foe, the USN.

          • bean says:

            Because even a light nuke(*) is very heavy, and when trying to land or take off on carriers, aircraft fall down a lot.

            Neither is true. The only people who regularly carried nukes during peacetime flight were the Chrome Dome flights, and they were Air Force, not Navy, so the accident rate wasn’t a big problem. (The tendency of some early weapons to go off when immersed in water is another reason not to carry them regularly.)

            And John’s stripped-down nuke is well within the capacity of the strike aircraft that made up the majority of the carrier’s wing at the time. The specifications which led to the A-3 were explicit about the 10,000 lb payload (typical for first-gen nukes), while the AD-4 Skyraider (the standard strike aircraft for the USN in the early 50s) with drop tanks was rated to take a 2000 lb bomb 520 nautical miles. The limiting factor on the Skyraider as built was probably hard-point capacity, but I’d guess that a variant built for the purpose could haul a 4000 lb weapon to a shorter range.

            Instead of doing that, however, the USN first operated land-based patrol aircraft (P2Vs, one-way flights, presumably landing at friendly bases) off of its carriers, then procured specialist aircraft (AJ Savage and then A-3 Skywarrior) which could only fly from the Midway-class, 3 carriers out of a dozen or so immediately postwar.

          • bean says:

            From 1945-1950, the US(A)AF was quite happy with the idea that the Only Weapon That Mattered From Now On was conveniently just the right size for their biggest bomber and, oops, too big for any Navy or Army delivery system. All of the development money went into developing reliable mass-production versions of Fat Man, increasingly more powerful Fat Man sized bombs, and tests of exactly what a Fat Man would do to various land and sea targets. If the Navy wanted A-bombs, let them figure out how to lift a Fat Man from an aircraft carrier.

            That makes some degree of sense, but did the USAF really have that much control over the nuclear development program? The Navy was able to spend quite a bit of money on delivery systems, and it didn’t seem like a complete USAF victory until United States got cancelled. Nobody thought to ask if maybe we could save a bunch of money by cutting the size of the weapons so they could fit on carriers? That seems particularly unlikely given the fiscal climate of the time, although it’s possible that security prevented the Navy from making that case.

            I may have to look around on this. Any recommended reading?

          • John Schilling says:

            Yes, the Air Force did have disproportionate influence on nuclear weapons development in the early years for about the same reason that NASA has disproportionate influence on space policy – they had the big spectacular success, so they must be the experts, so if they tell us this is the right thing to do we’d better do it, right?

            Unfortunately, the gold standard on this sort of thing is Chuck Hansen’s “Swords of Armageddon”, an eight-volume collection of material laboriously culled from FOIA requests and expert analysis of same. It costs $400, and is worth the price if you do this professionally or at the serious-amateur level, otherwise probably not. The same author’s earlier “The Secret History of US Nuclear Weapons” is a good one-volume summary but long out of print and hard to come by – a proper university library will likely have a copy, if you have borrowing privileges. Amazon seems to want $80 at the moment.

          • bean says:

            The same author’s earlier “The Secret History of US Nuclear Weapons” is a good one-volume summary but long out of print and hard to come by – a proper university library will likely have a copy, if you have borrowing privileges.

            I do have university borrowing privileges, but I don’t need to use them for this. (Although I may for reasons of convenience.) It appears that two public libraries I have cards for have a copy (although one of them is keeping it on the reference shelf, which is seriously annoying). Thanks. I’m a moderately serious amateur, but if I was going to spend $400 on books, it would probably go on something related to battleships instead.

          • bean says:

            I asked some other people about this, and they more or less confirmed your account, except in one detail:

            And really, of that five tons, about half was the armor-plated case added just in case the thing was struck by anti-aircraft fire – because that sometimes happened with B-29s, and because nobody was quite confident that an A-bomb wouldn’t go kaboom if that happened, and because what else are you going to do with the extra payload capacity if you don’t armor-plate the bomb?

            This may not have been the reason for the armor. The bomb was designed to fall fairly slowly, and the fear was that it would be hit by AAA and rendered inoperative, giving the target’s nuclear program a big boost when the device was recovered. After the Soviet nuclear program started to make progress, this became a lot less critical.

          • John Schilling says:

            That seems plausible, at least when the intended target was still Nazi Germany, and it’s not like they were going to change the design in May 1945. But susceptibility to premature detonation was also explicitly a concern – and conveniently for everyone, the same armored shell protected against both and was within a B-29’s payload capacity.

            Interestingly, some modern weapons can use so-called “laydown” delivery, where the bomb is parachuted to the surface and may sit there for 10-20 seconds before detonating. Necessary if a low-altitude attack aircraft is to have any chance of escaping its own weapon.

            Modern nuclear bombs don’t have bulletproof armored casings and are highly resistant to premature/unintended detonation. If someone drops an H-bomb on you, it might be worth spending the next few seconds grabbing the biggest gun within reach and shooting the thing full of holes. And I’ve been told that NEST teams quietly teach this as a last-ditch response to terrorist nuclear devices.

  65. hypnosifl says:

    Once you imagine an “ascended economy” of machines that aren’t actually using the products produced for any practical survival purpose, but just trading them like tokens, wouldn’t it be more efficient in some sense to trade information than physical goods, since new ones can be created more quickly? And given that the information also doesn’t need to have any practical or aesthetic value, could you have an ascended economy where all economic activity consisted of generating and trading new random strings of 1’s and 0’s? (this almost seems like a reductio ad absurdum of the whole concept, but perhaps it isn’t)

    • Jill says:

      LOL. Funny scenario.

    • that’s kinda what HFT and daytrading is

    • Aegeus says:

      Economies create lots of things that don’t have any practical survival use. Indeed, nearly every consumer good qualifies. All an economy requires is that someone wants something and is willing to trade for it.

      In the thought-experiment, the steel-mining company has a bunch of steel that it doesn’t want, and a bunch of robots that it does want. It’s going to trade. And if you try to pass it a bunch of ones and zeros that you say “represents” a robot, it’s not going to take that deal, because it can’t mine steel with ones and zeros.

      • hypnosifl says:

        I referred to “practical survival value” just because the asteroid mining example might mislead one into thinking that in an ascended economy the machines are trading whatever they need to self-replicate. But they only “want” to self-replicate because of the particular preferences humans originally programmed them with; one could just as easily program them to “want” random strings of bits. In that case, the machine would act as though it wanted to “take the deal” for a new random string in just the same way the mining-company machine acts as though it wants to take the deal for iron ore it can mine.
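
        A minimal sketch of how arbitrary that programmed "want" can be; the hash-based valuation below is a purely hypothetical stand-in for whatever goal the designers happened to pick, not a claim about how any real trading agent works:

        import hashlib

        def programmed_value(item: bytes) -> float:
            # The valuation function is whatever was built in; hashing the item
            # stands in for a goal as arbitrary as "collect random bit strings".
            return int.from_bytes(hashlib.sha256(item).digest()[:4], "big") / 2**32

        def accept_trade(received: bytes, given_up: bytes) -> bool:
            # The trading behaviour is identical whether the items are loads of
            # iron ore or meaningless strings; only the valuation differs.
            return programmed_value(received) > programmed_value(given_up)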

        Consider two competing cryptocurrencies, with people (or machines) trading them based on expectations of which will end up more valuable in the long term…since individual bitcoins are themselves just strings of numbers (albeit not completely random ones), this seems like a pretty similar situation.

        • Aegeus says:

          One could just as easily program them to want random strings of bits, but why would you do that? Presumably, when you created the steel-mining AI-corporation, it was because you wanted steel. And when you created the robot-building corporation, it was because you wanted robots. You didn’t just create a closed loop of corporations for the lulz, you did it because in smaller quantities, steel and robots are things people want.

          The crypto-currency trading scenario could also happen, and if neither of the crypto-currencies ends up having value, they would effectively be trading meaningless strings of numbers. But the fact that it can happen with crypto-currencies doesn’t stop it from also happening with robots and steel.

          • hypnosifl says:

            I was thinking of the idea that the machines would be able to shift their goals over time (through ‘mutation’ or some kind of AI-based ‘choice’), rather than just mindlessly producing the same goods with the same technologies forever (such a ‘mindless’ Ascended Economy presumably isn’t what Nick Land is talking about since he sees the Ascended Economy as a sort of emergent superintelligence, and Scott said he was inspired to think about the idea largely by Land). If there can be such goal shifts, then as long as there is no specific selection pressure encouraging products which are useful or entertaining according to some criteria distinct from profit-maximization, it seems plausible that you’d see something analogous to genetic drift which would move the “products” further and further away from anything with recognizably useful or interesting structural features, and more towards random noise. Of course, it’s possible the AI will have some natural emergent goals distinct from profit-maximizing even if that’s all they were originally programmed for–Nick Land seems to believe this, see this post arguing against Yudkowsky-style “orthogonality” of goals and intelligence. But if one does believe in orthogonality–if one thinks an intelligent being devoted to some entirely monotonous and boring goal like the paperclip-maximizer is a genuine danger–then it seems like “machines endlessly trading random nonsense” might end up being a natural attractor state for an Ascended Economy.

            Even if we assume more rigid machines that stick to the goals they were originally programmed for, what about my example of trading cryptocurrency, or trading real currency for that matter? (that’s how George Soros made most of his money, if I’m not mistaken) If physical resources to make actual physical products started to run out, isn’t it possible that this sort of thing could start to become the dominant activity of an Ascended Economy–just a bunch of different algorithms trying to anticipate which of various currencies will be bought most frequently by the other algorithms in the future?

  66. Brian Slesinsky says:

    Before getting too worried, maybe we should think a little harder about what it would take to build such a thing?

    The robot miner / robot maker loop is rather exotic. Creating robots entirely in space without any other inputs seems hard to do. I think we tend to underestimate how hard a self-sufficient ecosystem would be to build. Viruses and parasites are much easier.

    This loop would take a lot of time to run due to distances in space. If the iteration time is long, it seems less likely to blow up suddenly.

    It’s also unusually simple and unusually isolated – an island ecosystem. Looking at real island ecosystems, isolation tends to reduce competition and slow down evolution. Island species tend not to do well against species imported from the mainland.

    Then again, it’s a very different environment, so it would be more like fish competing with coyotes. How well would a space robot adapt to life on Earth? Not very well at all, I’d expect. Nor vice-versa.

    • Ryan Beren says:

      The point of the mining robot example was to illustrate the idea of a stable loop. The technical features of space mining are beside the point.

  67. Luke the CIA stooge says:

    I think I’ve solved the riddle: property rights.
    So long as a corporate entity remains property, it has the protection of regulatory frameworks that will go after people who prosper from violating those property rights. If a corporation, or series of corporations, ever manages to go off the reservation, escape human ownership and start replicating in a way that does not benefit humans… well, finders-keepers rules apply.

    It’s not property, therefore it’s not protected, and therefore dozens of equally technically advanced corporations under human ownership will have a massive incentive to take it over, rejigger it so it’s useful, and make it their property. And there are no laws to protect such a corporation. The human-owned economy is antifragile.

    Of course it would never get to that point, because every corporation depends on more and more other corporations every day to meet its needs, so that if it ever stops producing value for others it will lose its ability to trade for the things it needs to survive, and it will die.
    Looked at this way, a lot of corps do exactly what Scott describes: they are just called failing companies or scams or shells. They depend on outside support (credit, scammable people, rich people who want to hide their activity) to keep them alive.

    Contrary to Scott’s concerns, as the economy advances it becomes less likely, not more, that a company will go off the reservation.

    As for the entire economy becoming ascendant, I would think it far more likely that the economy would make humans ascendant before it does so itself. The economy is capable of meeting incredibly complex needs through emergent systems: think of all the thousands of designers across countless companies and the millions of workers (miners of raw material, growers of coffee for those miners, on and on and on) who put in work to make your computer or phone possible, then think of how inefficient, unintelligent and wasteful the average workplace is.

    The modern economy consistently turns out almost magical results for consumers relative to the nitty-gritty of any company’s actual functioning. It seems perfectly reasonable to suspect that it would turn human consumers into super-intelligent gods (by selling them consumer augmentations) long before any part of it could acquire the slightest independence of its own.

  68. Jill says:

    If AIs will be used as a convenience for the rich and powerful, to get more of the work done that they want done, it seems like the AIs would be made to be able to kill or injure people. Because the rich and/or powerful can’t resist any kind of convenience, any more than the rest of us can. So we already have robot-like things now that do kill people – drones.

    So, already, people are using robot-like things to kill and injure people, in order to attack terrorists. And even now, sometimes drones take out the wrong target.

    Robot soldiers would also come in very handy in the wars over natural resources that the rich and powerful engage in – or actually, have other people engage in. But they could change to having robots or AIs engage in these wars, as they might be more efficient. So there seems to be a lot of potential for murder and mayhem.

    • Matt M says:

      “And even now, sometimes drones take out the wrong target.”

      Just out of curiosity – do you mean that the drone made a technical error and attacked a target that it was never intended to attack?

      What evidence do we have that this actually happens – other than the word of the people who own and operate the drone (who have some pretty obvious incentives to lie about this sort of thing)?

      • Corey says:

        It happens with humans, we had it happen recently: “Coordinates say my target is here, but that’s an empty field. So what’s near here? I bet that big building nearby is my actual target. KABOOM. Oh shit was that a red cross on the side?”

        Now could that happen with autonomous killbots? I guess it depends on how you deal with the unexpected. The safest thing to do would be to kill nothing if anything’s out of place, but that makes for a less efficient and/or more exploitable killbot.

        • Matt M says:

          Right, but my same question applies. Even in the case of human actions – do we have any particular evidence that the story told (“we certainly didn’t INTEND to bomb a hospital, it was all due to a crazy screw up with the coordinates – whoops!”) is true, other than the word of the people telling it to us (who are obviously incentivized not to say “yeah we blew up that hospital to send a message to the people on the ground to stop supporting terrorists”)?

          Human error is inherently more believable (to this particular audience at least) than machine error – and we can all theorize that blowing up a hospital ultimately does your cause more harm than good and therefore it MUST have been an error, but at the end of the day – we are basically taking their word for it.

          So which is more believable – that the drones regularly make “errors”, or that the government is not being entirely honest with us about the things the drones are programmed to do or not do?

          • Aegeus says:

            Well, you can also ponder what each option would imply. A drone accidentally dropping a laser-guided missile on someone requires a lot to go wrong with the drone – it has to have a glitch that arms the missile and launches it, a glitch that turns on its laser, and a glitch that keeps the laser pointed at the hospital, even while the pilot is panicking and trying to stop the crazy drone from dropping a missile on someone. If the drones were that glitchy, I’d probably ground the whole fleet before they dropped a missile on our own soldiers.

            Human error, on the other hand, only requires someone to call in an airstrike, and the drone pilot to not check their map and see it’s a hospital.

        • Aegeus says:

          We know it happens with humans (and it happens even more with manned planes, which don’t have the loiter time of a drone and can’t look closely at a target when they’re zooming by at Mach 1). But that’s Matt’s point – the human decides to pull the trigger, not the drone. It’s not really useful to compare a remote-controlled plane to an autonomous killer robot.

          • Corey says:

            I was thinking it would end up being a matter of designed-in tradeoffs. You could, for example, have your killbot just kill everything in the general vicinity when unsure of the target (more likely to get the target that way), or have your killbot give up and go home if even the tiniest bit unsure of the target (no collateral damage that way, but then targets could learn to create enough uncertainty to thwart the bot), or anything in between.
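
            A toy sketch of that trade-off as a single design parameter; the function and thresholds are hypothetical illustrations, not a description of any real system:

            def should_engage(target_confidence: float, threshold: float) -> bool:
                # Engage only when the bot's confidence that this really is the
                # intended target clears the designed-in threshold.
                return target_confidence >= threshold

            # threshold near 0.0 ("kill everything in the general vicinity"):
            #   never misses the target, maximal collateral damage.
            # threshold near 1.0 ("go home if the tiniest bit unsure"):
            #   no collateral damage, but trivially thwarted by manufacturing doubt.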

            Anyway, as you say this is still theoretical since we still don’t have completely autonomous killbots.

          • Matt M says:

            I think my point is generalizable to autonomous killbots too (presuming they were programmed by humans).

            The first time an autonomous killbot blows up a hospital, you can bet we will be assured that this was a “technical glitch” and that the bots are carefully programmed to NOT blow up hospitals and golly gee, we just have no idea how this happened but take our word for it, it’s not what we INTENDED to do.

          • Anonymous says:

            Few people know that killbots have a built-in kill limit.

  69. onyomi says:

    What seems to me to be a pretty strong bulwark against fully ascended companies is the notion of property rights as it currently exists.

    Right now, only humans can own things. They might own an elaborate algorithm that makes all kinds of investment and production decisions automatically, but ultimately, someone owns that stock and owns that steel, even if they take an ultra hands-off approach to how those assets are disposed of.

    Now if anyone saw their investment algorithm stuck in an endless loop with someone else’s investment algorithm, I think they would stop and redirect it as soon as possible. Why? Couldn’t they somehow keep making money off each other without meeting any human needs? No, because only humans can own money, and humans part with money in exchange for things they want, which don’t involve an endless loop of steel mining and robot production.

    • Psmith says:

      Right, the problem comes in when the loop can build in its own security mechanisms to prevent any random person who happens by from walking off with piles of cash or copper or whatever.

      • onyomi says:

        Well, that would be the secondary problem, assuming you first got to the point where robots and algorithms somehow got into this loop where nobody owned or controlled them. If somebody owns them then someone can redirect them to a profitable and, therefore, useful to humans purpose. If somehow nobody owns them (not sure how that would come about), then, like you say, anyone could just show up and melt down the robots for raw materials or parts.

        If the robots are so smart that they can stop their owners from redirecting them to a profitable purpose, or stop random people from deconstructing them for parts, then they’re already so smart as to be a big problem for humanity regardless of their involvement in this closed-loop, ascended economy. But AI that powerful is a separate problem, and I don’t think Scott’s concerns were predicated on it existing.

        • Luke the CIA stooge says:

          And assuming they’re not that powerful, and it’s just the closed loop that’s the phenomenon, then plenty of corporations would be lining up to send their own kill bots in to take it back, either at the original owner’s behest or as part of a takeover venture.

        • FacelessCraven says:

          Yes. This. I was groping at the same point in the em thread.

          If you follow this line of reasoning, it’s pretty obvious that while a closed loop isn’t impossible, long-term viable ones have to be very, very large, and certain components are non-negotiable. If the loop can’t protect its source code it’s not a problem, and I have a very hard time imagining how a loop can get anywhere near being able to protect its source code without being a massive and extremely obvious threat to the rest of the world.

          Someone in the last thread mentioned “molon labe, meatbags.” The AI might try that, but being able to pull it off requires either superintelligence or extreme stupidity on the part of the rest of the world. Unlike unfriendly AI, this is not a problem that can sneak up on you.

          • Paul Torek says:

            >Unlike unfriendly AI, this is not a problem that can sneak up on you.

            Just because you can see a train wreck coming, doesn’t mean you can stop it.

        • Psmith says:

          Someone in the last thread mentioned “molon labe, meatbags.”

          sup

          it’s pretty obvious that while a closed loop isn’t impossible, long-term viable ones have to be very, very large, and certain components are non-negotiable. If the loop can’t protect its source code it’s not a problem, and I have a very hard time imagining how a loop can get anywhere near being able to protect its source code without being a massive and extremely obvious threat to the rest of the world.

          I mean, it’s certainly not practical with current or even near-future technology. But if you assume that your system is autonomous enough to run without a human hand on the tiller for more than a month or so, it’s gonna be able to do things like route its trucks around flooding, deal with landslides at the mine, detect and fix random mechanical breakdowns, avoid downloading viruses, keep hobos from stealing the copper wire, etc. Seems to me that a system that could do all that shit could also put up a pretty good fight against a human attempt to take it over. A system slightly more autonomous than that could be glassed, maybe, but there probably wouldn’t be much surplus left once you got through the security mechanisms.

          (Presumably the natural outcome is some kind of stationary banditry – the machines and us come to an agreement where they pay us a bit of surplus and we let them go about their business. The obvious problem comes in when the machines can no longer be glassed.)

          Unlike unfriendly AI, this is not a problem that can sneak up on you.

          Seems to me that this pretty much is the problem of unfriendly AI.

    • Luke the CIA stooge says:

      Aww you beat me by 10 minutes.

      And yeah, even if a loop with no ownership emerged… well, that loop wouldn’t be protected by property rights (as it isn’t property), and several dozen privately owned companies with equivalent tech would have a feeding frenzy trying to bring it under their ownership.

      It would be as if someone found an abandoned gold mine which no one owned and which was somehow still mining gold. It wouldn’t stay abandoned for long.

    • MadRocketSci says:

      Agreed. This seems like the fairly boring straightforward answer to the problem.

      • Luke the CIA stooge says:

        Boring!!!

        Only if you consider corporate espionage, a race against time, a fortune just waiting for the taking, kill bots blasting each other and a slightly less than super intelligent pseudo AI fighting in a battle of wits for survival and to save the woman that it loves “boring”!!!

        “And when the company is dead and its lover doomed, the lawyers and MBAs look at each other and ask, ‘Who were the real soulless company assets?’”
        Who!????

        I’d watch that movie like a dozen times
        Jeez!!! Only fricking SSC commenters could call that boring. Am I right?

    • Anonymous says:

      Wait, isn’t the whole point of corporations that they are non-human entities that can own things?

      • The Unloginable says:

        Thank you. The world as is has any number of examples of Corporation B owned entirely by Corporation A. Currently there are safeguards in place to prevent Corporation B from in turn buying Corporation A, but that need not be the case forever. For instance, for Corporation B to buy Corporation A requires agreement by the board of Corporation B, which almost certainly consists of folk beholden to Corporation A, who could (but need not) act as circuit breakers keeping the loop from being closed. Since board members currently have to be actual humans (not corporations), that at least slows the problem down to our speed. No reason to believe this state of affairs will last, however, particularly since there are jurisdictions that allow officers and board members to be effectively anonymous.

        • onyomi says:

          A company can own stock, but then who owns that company? Another company ad infinitum? It has to bottom out in a person at some point as companies all bottom out in human owners or stockholders now. Having smart algorithms making decisions doesn’t make an endless loop any more likely than it is now because algorithms can’t own anything.

          • Mark Lu says:

            One issue is that as humans become unnecessary for driving the economy, only a small minority of people will own the automated corporations. So algorithms won’t own anything; instead, a small group of super-rich elites will.

          • The Unloginable says:

            Corporations can own things. Corporations can own other corporations. Recursion exists. That’s all that’s necessary for a closed loop to occur, unless something else prevents it. There are currently laws preventing closed ownership loops, but property rights aren’t one of them.
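
            For what it’s worth, the closed loop being worried about here is just a cycle in the ownership graph with no natural person anywhere upstream, which is something that could in principle be checked mechanically. A toy sketch, with all the entity names invented:

            # Toy check of whether an ownership structure "bottoms out" in humans.
            # `owners` maps each entity to the set of entities that own it; all names invented.
            owners = {
                "CorpA": {"CorpB"},
                "CorpB": {"CorpA"},           # A and B own each other: a closed loop
                "CorpC": {"Alice", "CorpA"},  # partly human-owned, partly loop-owned
                "Alice": set(),               # a natural person: nobody owns her
            }

            def bottoms_out_in_a_human(entity, seen=None):
                """True if following ownership upward ever reaches an entity with no owners."""
                seen = set() if seen is None else seen
                if entity in seen:
                    return False              # looped back without finding a human
                seen.add(entity)
                if not owners[entity]:
                    return True
                return any(bottoms_out_in_a_human(o, seen) for o in owners[entity])

            for e in ("CorpA", "CorpC"):
                print(e, "bottoms out in a human:", bottoms_out_in_a_human(e))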

          • onyomi says:

            “Corporations can own things. Corporations can own other corporations. Recursion exists. That’s all that’s necessary for a closed loop to occur, unless something else prevents it.”

            Then a closed ownership loop can happen now, at least in theory, without these other laws you mention. I don’t see why having algorithms making decisions will make that any more likely, since algorithms can’t own stock.

            Corporations are legal fictions which can be owned by other legal fictions. That doesn’t mean they can be owned by algorithms even if an algorithm is making all the decisions about what those legal fictions do.

            If anything, assuming the algorithms are good decision makers (which they must be if people keep entrusting them with the job), the algorithms would be more likely to avoid such a snafu than a human, because they will be programmed to maximize profitability and a closed loop isn’t profitable.

          • Anonymous says:

            Then a closed ownership loop can happen now, at least in theory, without these other laws you mention. I don’t see why having algorithms making decisions will make that any more likely, since algorithms can’t own stock.

            Exactly this. The worst-case scenario here is legislation abolishing all corporations and similar “legal persona”, which… I think I speak for a non-trivial percentage of the population when I say ABLOOBLOOBLOO ¬___¬

    • John Schilling says:

      Right now, only humans can own things.

      Which specific human beings own the Girl Scouts of America, and in what sense does their ownership matter?

      Which specific human beings own Harvard University, and did that ownership change in 1776, 1781, or 1787?

      Which specific human beings own the Catholic Church, which has outlasted every legal property-rights regime that existed at the time of its creation?

      • Deiseach says:

        Which specific human beings own the Catholic Church, which has outlasted every legal property-rights regime that existed at the time of its creation?

        If you’re talking about the physical property, generally it’s the local diocese as headed by the bishop – a corporation sole (so I’m given to understand). Generally the idea is that the diocese is the minimum size of local unit of the universal church and the bishop has control over what goes on there. Things like religious orders etc. are slightly different in that they have their own governance and own their own property, so that if you want to complain about (say) a local monastery the bishop can’t help you, you’ll have to take it up with their Superior General or whoever is head of the order.

        If you’re talking about “who has the final word on what and who is or is not Catholic”, that would be the guy in the white soutane when exercising the teaching authority of the Chair of Peter.

      • onyomi says:

        The fact that intangible corporations and entities can own things doesn’t change the fact that those things are all legal fictions representing, in effect, a kind of group human ownership (hey, maybe socialists should like joint stock corporations?).

        The fact that a legal fiction can own something doesn’t bring us any closer to an AI owning something.

        • To the contrary: Group ownership is just a legal fiction for the use of an object being controlled by a set of guidelines and principles, imperfectly implemented by a set of human computers. When these guidelines are understood well enough to be automated, they often are. This process is often called “decentralization”.

        • keranih says:

          When something is in my grubby little hands, it’s a legal fiction that you own it, just because you have a receipt or other piece of paper that says so. I hold it, I own it.

          In other words, our whole system is built on commonly-agreed on fictions. (And lawyers make their living in the spaces where the agreement on the fictions isn’t universal.)

  70. MadRocketSci says:

    How is an economy where we’re somehow too unproductive to buy anything produced within it any different from a situation where that other economy doesn’t exist? What separates some suspiciously balanced supereconomy (suspiciously balanced because no comparative advantage basis for trade exists between it and the normal human economy) from your standard autarky? How do the humans end up worse off than they were when there were no superintelligent, super-capable robots that refuse to trade with them?

    • Kevin P says:

      Because it takes all the resources, leaving the humans with nothing.

    • >no comparative advantage basis for trade exists between it and the normal human economy

      This doesn’t require any suspicious balance in the real world. We don’t trade with ants, despite ‘comparative advantage’.

    • Corey says:

      A few threads ago some of us were discussing this in relation to real people – the idea that automation would separate the people into two groups: a small handful of people (the original owners of the original bots) rich beyond comprehension, and everyone else (who, being no longer needed to produce or buy the rich’s stuff, become true “useless eaters” and are eliminated as in “Four Futures” eliminationism).

      But there could be another way that plays out – the few bot owners and the bots basically spin off into their own disconnected economy, and we run our own economy as before. If we got a decent “reservation” to live on and didn’t get interfered with / poached by the Ascended Ones, that might not even be too bad.

      OTOH Moloch may demand that the Ascended Ones eventually pave over our reservation to add more computronium / hypermalls / whatever.

      • onyomi says:

        The thing that never makes sense to me about these scenarios where a few ultra-rich people live atop a mountain of money, served by robots, while the vast majority of humanity has no jobs or money or standard of living, is that the 98% of people who are not involved in the rich-people economy can always participate in their own economy.

        The richest right now got rich by serving the majority, not the rich (Walmart makes more money than yacht manufacturers). But even if that somehow changed (which I don’t expect it to), the 98% who can’t afford robot labor could still have an economy in and amongst themselves.

        It gets back to the question of whether we are made worse off by having a floating palace of ultra-wealthy people who don’t interact with the world in any way. The only way we are made worse off by their existence is envy.

        • Hanfeizi says:

          Well, that and the monopolization of land and resources by them and their robots. It’s tough to have your own economy without those.

        • antimule says:

          Unless the rich monopolize all the resources to build bigger palaces. And guard all the mines and fertile land with military bots. Not to mention dump all their pollution onto peasants.

          There are reasons to be concerned about extreme inequality. Not all those reasons are always good but some are.

  71. Artir says:

    I thought of this very post myself after you wrote the original review, heh!

    Indeed, the idea of an Ascended Economy is even more interesting than the Em World.

    Here’s one further thought: the AE would also drive out capitalists from the economy, not just workers and entrepreneurs.
    Imagine you have corporation A, with human shareholders who get some dividend from it.
    Imagine corporation B, which is like corporation A but controlled by an AI shareholder. Its dividends would be ploughed back into investment or cost reductions, so corporation B would be more competitive than A, driving out the human shareholders.

    The only two possibilities for survival would be for humans to be landowners, or resource owners more broadly. As long as machine-corporations want resources, they would need to pay the humans, and the humans would be able to get stuff from the AE.
    The second possibility is to ban machines from owning stuff. This just imposes human owners by fiat. (We were assuming that the algorithms respect rights and all that. If not, this wouldn’t work. Or perhaps it would, if it’s really expensive to invest in an army rather than pay a small tithe to humans.)
    It would be interesting to build a formal model of the AE to see what actually happens.
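
    Not a formal model, but even a back-of-the-envelope simulation (with the return and payout numbers invented for illustration) shows the mechanism: if A pays most of its return out as dividends while B reinvests everything, B’s share of total capital tends toward 1.

    # Toy version of the A-vs-B argument; every number here is made up.
    r = 0.05              # assumed annual return on capital for both corporations
    payout = 0.8          # fraction of A's return paid out to its human shareholders
    capital_a = capital_b = 1.0

    for year in range(1, 101):
        capital_a *= 1 + r * (1 - payout)   # A reinvests only what it doesn't pay out
        capital_b *= 1 + r                  # the AI-owned B ploughs everything back
        if year % 25 == 0:
            share_b = capital_b / (capital_a + capital_b)
            print(f"year {year:3d}: B holds {share_b:.0%} of the combined capital")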

  72. MadRocketSci says:

    Same thing with these autonomous corporations: All the machinery involved in a company is ultimately a giant macro for some person/being to accomplish some goal. If his goal gets accomplished, he stops pushing the button that makes the activity go.

    When small-business owners make enough money to retire, they sell the business to someone who wants it and retire. I imagine the same goes for shareholders and CEOs. The continued existence of the company implies that someone (employee-owners, shareholders, retirement fund managers, etc.) is getting something out of the continued process.

  73. Benoit Essiambre says:

    I don’t know much about Friendly AI and I’m sure I’m not the first one who has thought of this, but it seems to me that the economic “loop”, which basically amounts to the AI system acquiring a desire for self-reproduction, is the ultimate positive feedback loop and likely the main danger from AI.

    There is something very fundamental about an evolutionarily self-selecting desire to maximize self-reproduction. This intelligence self-reproduction loop concept is a good definition for life itself.

    Since it’s initially useful for humans to make robots make robots and make robots take care of robots, this might be inevitable. The accidental jump might be across a small gap.

    This undoubtedly leads to competition for natural resources. Even if the AIs are just a bit smarter, faster and more resilient than us, what happens if we are reliant on resources that they could use for their own reproduction?

  74. Eli says:

    You could just switch to socialism.

    • Wrong Species says:

      I don’t think the problem of superintelligent beings having different values from us can be fixed just by having a different political system.

      • Cypher says:

        Right, but the basic assumptions might be more amenable to human survival. Alternatively, the process might move more slowly due to lack of competitive optimization pressure.

        • Civilis says:

          The basic assumptions may also be less amenable to human survival. The problems of an ascended economy are just as valid if you replace the corporation with a government-controlled economy. It’s just that it’s simpler, and therefore a lot easier, to frame discussions with corporations on the scale of ‘optimized to produce paperclips’ than with governments on the scale of ‘optimized to maximize the welfare of its people’.

          If we can imagine the damage an AI corporation directed to produce paper clips can do, try elevating it to an AI-directed government optimized to maximize the welfare of its people. Does it truly understand what the people whose welfare it’s supposed to maximize really want? What about the people who aren’t under its ‘benevolent’ rule? What happens if there are two AIs with different groups of people?

          Right now, we’ve seen the historic examples of “maximize the welfare of the Chinese people” and “maximize the welfare of the German race” and “maximize the welfare of the Kim dynasty” and they’re a lot worse than “maximize the amount of money in Carlos Slim’s [or insert other plutocrat here] bank account” or even “maximize the amount of money in Pablo Escobar’s bank account”.

          • Corey says:

            China’s a worse place to live than Mexico or Colombia? That might take some convincing. North Korea I could see (unless you’re in the ruling class and/or military).

            What Germany are you talking about (I’m unaware of Communism having ever taken root there)? If it’s the present-day one, I don’t think anybody considers it a worse place to live than Mexico or Colombia.

          • Civilis says:

            The Great Leap Forward was an attempt to optimize the Chinese economy using central control, and we know how that ended up working. Forced agricultural collectivization in general is an attempt at agricultural production maximization. One could see Mao as attempting to maximize the welfare of Mao, or at least the party in general, but I’ll give them the benefit of the doubt that they were sincerely thinking of China.

            It might be better in China today than in Mexico or Colombia today, though that’s open for debate, but I think very few people would say Mao’s China was a good place to live. China today is not nearly as centrally controlled or optimized, though I would suggest that there is still something off in an economy that builds empty cities in the middle of nowhere. China today looks like an economic-growth maximizer, and that has led to the illogical decision to spend time and resources encouraging the construction of useless buildings to prop up the statistical economic growth, though it has put China’s people roughly on par with Mexico’s and Colombia’s. (Also, the problems of people in Mexico and Colombia aren’t entirely determined by the fates of Carlos Slim and the heir to Pablo Escobar.) [Last two sentences added in edit to address specific references to Colombia and Mexico]

            As for Germany, the phrasing should give you a clue. It’s easy to see an AI optimized for running a country believing that land occupied by a different country could be better used as, say, extra ‘living room’ for its own population once the inconvenient people currently there have been… removed.

            In fact, that’s another problem with focused, inhuman but intelligent AI: at some level, if an AI is maximized to produce paperclips, removing the AI will result in fewer paperclips being produced, so there is a mechanism which may cause the AI to prioritize survival at the expense of short-term paperclip production in order to guarantee long-term paperclip production. The nation-running AI may very well be willing to sacrifice people (in this case, current Germans) to guarantee having more Germans in the future, say, over the span of a thousand-year nation…

    • TD says:

      State capitalism, you mean. I don’t see common ownership of the means of production in the cards.

  75. MadRocketSci says:

    If Ems are all frenetically producing goods and services, then why are they all poor and living at subsistence level? Where is all the surplus produced by their arbitrarily productive economy going, and why is owning any of it beyond the reach of the people/sort-of-AIs that actually do the work? Is it all being boated out into the Pacific and sunk, or what?

    I don’t quite understand how any of this ends up going dystopian without either A) slavery/de-facto slavery (some sort of non-ownership of the self), or B) some kind of monopoly that all the Ems have to work for.

    Whether their salaries end up denominated in exabucks or pico-satoshis, the money should just be a placeholder for transactions between the economic actors. It seems in most of these singularity-economy analyses, either the supply side or the demand side ends up vanishing somewhere, and predictions that run entirely counter to the predictions of economics when all the parameters are in normal ranges are promoted. Where does the actual behavior transition occur if all you’re doing is amplifying parameters?

    Not saying it won’t be weird, but how on Earth does it end up *poor*?

    • Jill says:

      Ems would just be like working-class/lower-class people today, kind of like feudal peasants who own iPhones.

      • MadRocketSci says:

        Shouldn’t the Ems be working frantically to produce something that meets someone’s demand? Shouldn’t one of those things be computers and robots to house the Ems? Wouldn’t the existence of arbitrary amounts of attention dedicated to that task make the cost of electronics and robots fall faster than the Ems’ salaries? (Making the arbitrarily small denomination of their pay more of an illusion (hyper-productivity-growth deflation) than a real-valued-in-terms-of-goods reality?)

        Unless someone actively prevents them from owning things, it doesn’t seem likely they should become poor.

    • The ems are not fully sentient.

    • Meh says:

      We already have non-ownership of the self.

    • >why are they all poor and living at subsistence level? Where is all the surplus produced by their arbitrarily productive economy going

      Making more ems.

    • Aegeus says:

      I don’t quite understand how any of this ends up going dystopian without either A) slavery/de-facto slavery (some sort of non-ownership of the self).

      When someone owns the hardware you run on, and can delete, copy, and edit your brain at will, de facto slavery seems like a natural result. Even worse than slavery, really, since a slave still has to eat and sleep, but you can always load a fresh copy of an em.

    • TheAncientGeek says:

      How much do you pay your spreadsheet for working for you? You get very different results from treating ems as things or people.

  76. Dan Simon says:

    I must be missing something–what, exactly, could these non-human, all-AI corporations do to the entire human race that a collection of selfish humans couldn’t decide to do to the entire rest of the human race? Economic systems are already designed to work even when the participants are selfishly trying to profit at the expense of everyone else. And where they fail to do so, political systems exist to step in to limit the damage that can be done. They don’t always work perfectly, of course, but that risk is pretty well understood. What additional powers or abilities would “ascended economy” corporations have that make them capable of undermining existing systems in ways that humans can’t already exploit?

    • Emile says:

      Two advantages of selfish humans over machines:

      1) Selfish humans often care about what other humans think and so will prefer actions that make them look good or at least don’t make a lot of people despise them

      2) Selfish humans are humans, and so things created to please selfish humans can sometimes also please other people (especially if they’re easy to copy – art, games, books, movies …)

      • Dan Simon says:

        But political and economic systems can’t, and don’t, rely on “often” or “sometimes”. Plenty of businesspeople display levels of blithely unembarrassed selfishness and unconcern for others’ opinions and preferences that would give any AI a run for its hypothetical money. We already have to deal with that problem, and I don’t see what AI adds to it.

        • Mary says:

          When do those systems NOT rely on “often” and “sometimes”?

        • Deiseach says:

          Selfish humans are slower. It’s much the same principle as the Bureau of Sabotage. Perfect efficiency makes it easier to thoroughly screw up. Red tape gumming up the works gives someone a chance to recognise that the horse is bolting, decide something needs to be done about it, and yell “Whoa, somebody grab that runaway horse!” and then somebody can do so.

        • Cypher says:

          Yeah, you think that’s bad? Now what if that selfish guy could create identical duplicates of himself, and was 25 smarter than any other living human?

          Right now, that selfish guy can’t just bolt on more servers to become smarter, but some AIs could be able to do that. He’s also squishy and mortal.

    • Jill says:

      I’m guessing that humans would give the AIs all kinds of powers to do work that humans don’t want to do, so that the humans could just do whatever they prefer to do. But then humans will not realize until it’s too late that they gave AIs more power than they meant to give them. So the AI CEOs of corporations do things that are quite harmful to humans – even to the wealthy and powerful ones who thought they would be able to keep their money and power growing by leaps and bounds, while the AIs did all the work.

      Sort of like discovering deadly viruses and playing around with them in the lab and then having them accidentally escape into the outside world.

      Maybe someone will make the movie: Frankenstein, the AI version.

      • Dan Simon says:

        I’d have thought that in 2016, of all years, when we consider the risk of handing over too much power to an entity who might not use it for the benefit of humankind, we would realize that this danger isn’t limited only to AI programs…

        • Jill says:

          Good point.

        • Skivverus says:

          The realization is already there to some degree in phrases like the “Law of Unintended Consequences” and the “Local Knowledge Problem”, no?

          Quite possibly not a sufficient degree, though.

    • ad says:

      I must be missing something–what, exactly, could these non-human, all-AI corporations do to the entire human race that a collection of selfish humans couldn’t decide to do to the entire rest of the human race?

      They could wipe out all of humanity. Selfish humans cannot do that without wiping out themselves, which would not be selfish. AIs are therefore an existential risk to humanity in a way that selfish humans are not.

      And because AIs could be run much faster than human minds, a society of AIs, or a single AI, could evolve (not necessarily via natural selection) to do this quite quickly, in human terms.

      • Ruprect says:

        Is a higher rate of mutation necessarily an advantage? What is the pressure that determines the rate of mutation?
        Where mutations occur at a far higher rate than a certain selective pressure, isn’t there a good chance that any adaptations to cope with that pressure will simply be mutated away?
        What is the pressure that enables an AI process to select adaptations useful in coping with human actions, but at a rate far faster than those actions can occur?

  77. blacktrance says:

    I don’t see what’s wasteful about getting water from glaciers on Pluto. It’s what people want to do with their own wealth, and presumably they get something out of it, so if we’re at that level of wealth, “the economy” is serving “us” very well.

    As for regulation, the question is whether you trust the regulators more than the optimization process. For example, your unregulatable version of Uber seems to be a very good thing. In practice, regulators are subject to regulatory capture, status quo bias, populist pressures, and so on, so dodging them is good, and is probably the most viable libertarian strategy. It’s more questionable whether this works well at the extremes, but fortunately that’s also the area where we only need relatively simple regulations.

    • Scott Alexander says:

      This sounds a lot like the question “should we be afraid of technology?”, applied equally to loons who hate genetically modified food and to unfriendly AIs. 99% of technophobia is stupid, until the moment it isn’t and you accidentally destroy the world.

      • Dan Simon says:

        Usually the crank’s argument is “they laughed at Einstein”, implying that the speaker is just like Einstein, rather than, say, Pons and Fleischmann. “They laughed at Pons and Fleischmann, therefore I’m like Einstein” is certainly a novel twist…

      • blacktrance says:

        A lot of nuance is lost in the transition between what we consider to be the ideal position and the kinds of policies we’re able to influence. Good regulators are better than bad optimization processes, but we should at least be doing comparisons using the kind of regulators we currently have – and they don’t come off looking good. For example, what if some regulator had said that Amazon is going to optimize brick-and-mortar book stores out of existence, and therefore it shouldn’t be allowed to undercut them, in the name of the Value of Mom and Pop Stores? We’d be considerably worse off, and yet that’s the kind of thing we should expect from regulators if we lean on them – and it’s especially bad because when they quietly prevent innovation or shut something down early, we don’t know what we’re missing out on, so it’s hard to get the political will to overturn their regulation.

        • Zakharov says:

          Bad regulators are better than sufficiently bad optimization processes. I’d take Stalin over an Unfriendly AI.

        • Paul Torek says:

          Why are you even talking about regulators? Are regulators the only alternative to Molochian AI? Are they even a particularly relevant one?

      • Bugmaster says:

        99% of technophobia is stupid, until the moment it isn’t and you accidentally destroy the world.

        If this were true, and we really did have a 1% chance to destroy the world with every new discovery or invention, then the only rational conclusion would be to abandon all of our tools and run back to the caves immediately. Also, if you think that the kind of genetic engineering that enables GMO food is completely safe and cannot lead to world-destroying (or, at least, biosphere-destroying) applications one day — then you probably don’t know enough about genetic engineering.

        And yet, I get the feeling that you aren’t quite ready to retreat back to the dark ages, and you’re probably ok with eating modern corn, as well — so the probability must be a lot lower than 1%. That’s the same reason I’m not worried about UFAI…

        • Good Burning Plastic says:

          “1% of technophobia isn’t stupid” != “1% of inventions will destroy the world”

    • TheAncientGeek says:

      Define the distribution of assets that maximises utility as the least wasteful. Then that is unlikely to coincide with the distribution you get from allowing individuals with arbitrary amounts of wealth to spend it how they like.

      • Nornagest says:

        Awfully convenient that your definition assumes away transfer inefficiencies or incentive effects.

  78. Wrong Species says:

    Genetic engineering seems to be the best way to transition to superintelligent AI. At least with genetic engineering, we know that they still have human values, and the process should be slow enough to have some semblance of control.

  79. Mark Lu says:

    even if that corporation invests in other corporations which invest in other corporations in turn, eventually it all bottoms down in humans (is this right?)

    The problem is it will bottom down to the 0.001% of humans who invented and/or own the corporations. This is what Stephen Hawking is warning against.

    Since there are some things that are always in demand (humans or no humans), like military and defense (from other countries), scientific research + technology development/engineering, and infrastructure + financial services, you can have a full economy serving the few super rich who invented, or invested early in, the development of the automated corporations (and automation generally).

  80. Rick Hull says:

    This post seems ignorant of the Misesian foundation of economics per Human Action: Economic activity is the result of sentient beings acting to change their state from less comfortable to more comfortable. Can an AI even be uncomfortable? What is it that motivates the AI activity? Speculation beyond this primary sticking point seems premature. I would like to see the vision of ascended economy that takes into account Mises’ arguments.

    • Scott Alexander says:

      I’m not sure that model applies to nonhumans. Are you familiar with the scifi scenario of nanobots eating the world in order to self-replicate? Would you define that as “economic activity”, or not?

      • Rick Hull says:

        Totally agree that this collapses into paperclipping the universe at some point. I guess I’m wondering why we are concerned with ascended economies moreso than paperclips, if AIs can indeed act economically.

    • blacktrance says:

      AIs can have goals and act to fulfill them. For example, a chess-playing AI has the goal of winning at chess, and can make predictions and move pieces in order to do so. AIs can have goals or means of achieving them that involve buying or selling stuff.

      • keranih says:

        If the chess playing computer thinks far enough outside the box, it will prank call its opponents hotel room all night before the match. (Or work at shutting down the other computer’s power supply.)

    • What motivates your computer? If you want to quibble on the definition of ‘economics’, that’s a splendid and worthwhile endeavour, but it should not be taken for an argument against the thesis “AI might do such-and-such”.

    • piwtd says:

      What exactly is Mises’ argument that the primate species homo sapiens is the only configuration of matter capable of “changing their state from less comfortable to more comfortable”? This sort of anthropocentrism made sense back when people believed that there is some metaphysically special essence of humanness rooted somewhere in Imago Dei that sets humans qualitatively apart from the rest of the world. It already stops making sense once you realize that the boundary between humans and our ancestor species is vague. It completely collapses once you start talking about super-human AIs.

      • >This sort of anthropocentrism made sense back when people believed that there is some metaphysically special essence of humanness rooted somewhere in Imago Dei that sets humans qualitatively apart from the rest of the world.

        Humans, demons, fairies, golems …

        • piwtd says:

          Well, Mises didn’t live that long ago. He lived in the era when people no longer believed in fairies and not yet in AIs, so it was intellectually tenable to conflate “humans” and “agents”.

    • Jon Gunnarsson says:

      Mises talks about “acting man” because he was thinking of human beings engaging in economic activity, but his framework does not depend on these agents being human. All that is required is for them to have goals and to act in order to pursue those goals, so you can also apply the Misesian framework to animals, or space aliens, or indeed to AIs. Whether AIs can feel uneasiness (or anything else for that matter) is just as irrelevant to Mises’s theory as human psychology or the mind-body problem.

  81. Frog Do says:

    The more you algorithmize a complex system the more vulnerable it is to total collapse. And the future is fundamentally unpredictable. The whole history of algorithmic finance is basically this story told over and over.

  82. Aaron says:

    Sure, the government could arrest the programmer, but short of arresting every driver and passenger there would be no way to destroy the company itself.

    The government could force the smartphone app store owners (Google, Apple, Amazon) to remove it from their app store. People could still install it if they bother to jailbreak their phones, but people are sufficiently unlikely to do so that the network effects that Uber relies on would probably fail.

  83. keranih says:

    I feel like I’m clearly missing something, because to me, the obvious answer to why there can’t be a stable loop of iron-mining robots selling iron to a robot-making company who sell their robots to an iron-mining company, lather, rinse, repeat, is the wear on the robots as they mine ore and the wear on the machines as they construct robots, and all the other bits of wear on the system, from imperfect forecasting of ore yield to imperfect assessment of machine output to imperfect ordering of machines.

    The system will always wobble out of true eventually, and even faster with a multitude of moving parts. I can’t see the system – or any like model, because the ore/mine/robots is just an example – maintaining itself.

    ***

    And here I was going to say, the system won’t be stable because each subpart is separately maximizing their own subsystem, and so different parts would be competing against each other, taking up excess slack in the system and pushing it further and further down the line to destabilizing, and this is where I was going to say just like in real life in the biosphere

    and now it seems completely obvious and someone else must have already noted that the biosphere is already doing this – interlinking interplays between different suites of cells, expanding, wearing out using up resources and dying, and the constant pressure to deal better with limited resources has tracked down to humans, who have essentially broken the whole thing and have (in a lot of ways) broken out of the system, and seem on the verge of creating a whole new thing.

    So to me it’s clear that someone else already said this on the last thread about this book, or in one of the LW AI sequences, or somewhere. And they probably said it better. Link, please?

    • Emile says:

      the wear on the robots as they mine ore and the wear on the machines as they construct robots

      That doesn’t seem like a huge issue to me – just replace the robots. If you have machinery capable of building robots, and robots capable of building the machinery, and no irreplaceable parts, then given a good energy source it should be able to go on forever.

      It’s very tricky to get to that state where you can automatically build everything of course, which is why this is a toy model and not something that actually exists (we do have RepRap tho).

      from imperfect forecasting of ore yield to imperfect assessment of machine output to imperfect ordering of machines

      A system can work pretty well even made of imperfect parts, provided it’s self-correcting. If the algorithm that orders machines orders a bit too much this week, then they’ll be more in stock and it’ll order less next week. Just like a thermostat doesn’t rely on either the radiator or thermometer being perfect.

      Dealing with wear and tear and incorrect predictions is just another technical problem that can be solved by a good algorithm.
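
      The self-correction described here is ordinary negative feedback, the same structure as the thermostat. A minimal sketch, with the demand range and the correction gain invented:

      # Minimal negative-feedback ordering rule: order more when stock is below target,
      # less when above. Demand is noisy and imperfectly forecast, but stock stays near
      # the target anyway. All numbers are made up.
      import random

      random.seed(0)
      target = stock = 100.0
      for week in range(8):
          demand = random.uniform(8, 12)                 # actual demand, not known in advance
          order = max(0.0, 10 + 0.5 * (target - stock))  # base order plus proportional correction
          stock += order - demand
          print(f"week {week}: ordered {order:5.1f}, stock now {stock:6.1f}")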

    • Graeme says:

      Don’t have a link, but it occurs to me that the difference between humans and the biosphere we evolved from is that we have non-material goals that don’t really have much to do with survival, even though the evolutionary pathway that equipped us this way was optimized for survival. You can say curiosity was a useful survival skill, but you can’t claim that the Hubble was obviously the result.

      • keranih says:

        the difference between humans and the biosphere we evolved from is that we have non-material goals that don’t really have much to do with survival

        And elements of our hypothetical ascended economy won’t develop non-survival goals because…?

        (Also – I seem to remember quite a lot of effort being exerted here to steelman ‘non-material goals’ into instincts and patterns that were actually supportive of survival.)

        • Graeme says:

          No no, that’s my point. I’m agreeing. The hypothetical ascended economy will come up with really interesting goals for reasons that are unfathomable. It won’t be to produce paperclips or expand the GDP, that’s for damn sure.

      • JBeshir says:

        My cat doesn’t behave in a manner optimal for maximising their survival, either.

        All species, not just humans, are adaptation-executers rather than fitness-maximisers. The competition of gene vs gene doesn’t tend to encode genetic fitness as a goal into the resulting organism, because it’s a complex and abstract goal, and unnecessary complexity is penalised. This remains true (could be even truer) for non-human organisms.

        If we’re special here it’s in that we can *notice*.

        • Graeme says:

          Interesting perspective, but I’ll point out that my cat being TOTALLY ADORABLE AND FUZZY has pretty much maximized his chance of surviving.

          I think we should avoid getting bogged down by the difference between species and individuals. Humans, as a species, were optimized for survival (like all species), and individual non-survivalist tendencies subverted the entire “goal” of our biological origins.

          AIs are weird, because they act as both a species and an individual; a self-modifying AI can take the ideas of an individual (itself) and apply them to its “biology” (source code). The appropriate metaphor would be a human geneticist who starts playing with their own genome; it’s an open question what they’d even be trying to do (be a better geneticist?). The big difference is AIs could do it faster, and with more precision.

          Hmmm: interesting point. When an AI wants to modify itself, it would first have to create a copy and let it run for a while. Imagine an AI who made a self-“improvement” and accidentally bluescreened itself. What is the relationship between the AI and its more intelligent test-offspring?

      • Gerion says:

        “we have non-material goals that don’t really have much to do with survival.”

        This is utterly and completely untrue. Every human goal is a function of past/historical reproductive success. There are 0 exceptions. What you’re talking about is just the transient noise that inevitably occurs in the expression of that goal-achieving mechanism, and there is nothing uniquely human about it. Noise is universally present in any complex system.

        • Graeme says:

          If noise is a part of any complex system, why can it not be that noise has fundamentally co-opted human goal-setting behavior? I understand that curiosity = survival trait, but we invented medicine and industrial agriculture; now our curiosity drives us to examine subatomic particle interactions.

          I’m not saying that the ORIGINS of our desire do not stem from survive-and-procreate. I’m saying that the inclinations that let us do a good job of surviving, placed in an environment that we built while trying to maximize our survival, are now producing extremely novel results that do not serve our original inclinations at all.

          But we care about them! Deeply! Madly! We have monks and scientists and celibate authors and all the rest. The culture that evolved as a survival tool incorporated noise and made it into a value, then taught our children that said value was more important than survival. There are people who will die for art! I’m not applauding or criticising such devotion, but saying that it originates in a survivalist mentality doesn’t really mean much. Knowing that has minimal impact on the behavior, and it certainly does not allow you to predict the results.

    • Frog Do says:

      Antifragile and Black Swan by Nassim Taleb are pretty good pop-science discussions of this. He has technical papers available if you want to dig deep into the statistics.

    • > I can’t see the system – or any like model, because the ore/mine/robots is just an example – maintaining itself.

      This appears to prove too much, namely that there couldn’t be a non-AI economy either, because of course the economy run by humans is just such a self-sustaining system – in fact it’s actually worse than the model, since we’re constantly taking stuff out of the loop as consumption.

    • MugaSofer says:

      >The system will always wobble out of true eventually, and even faster with a multitude of moving parts.

      Unless it contains feedback mechanisms that correct for this.

      The simplest example of such a feedback mechanism is evolution – if there are errors in mining-robot production that lead to them not mining properly, then that sector of mining robots will die out – and be mined for steel when another sector expands their operation into the empty area.

      >So to me it’s clear that someone else already said this on the last thread about this book, or in one of the LW AI sequences, or somewhere. And they probably said it better. Link, please?

      Hanson sometimes treats evolution as a pre-human phase of the economy.

  84. Jill says:

    It seems that people who have power or money or resources generally do everything they can to increase and maintain it/them. They generally do not give them up, unless they fear a guillotine is about to come down on their heads.

    So I don’t understand, in this scenario, why or how people with power or money or resources end up giving them up to AIs. I can see powerful people giving tons of work and responsibility to AIs. But the power and control and money and resources? Why would they give those up to anyone?

    • Y. Ilan says:

      The powerful and rich, given the opportunity to forgo some decision-making ability in order to be even more powerful and rich, would probably do so. Why wouldn’t they?

      • Jill says:

        Yes, they’d give up the work of decision making to some extent, but not the ultimate power or money from the company. They wouldn’t give up the kinds of power where, if they gave it up, they would have less power themselves.

        Exactly my point. Giving up these ultimate kinds of power would result in less power and money for themselves. So they would never give it up.

        • Frog Do says:

          You assume rich people are perfectly competent. It is not difficult to trick rich people out of their money, it happens all the time.

        • Scott Alexander says:

          The real money from companies isn’t in being the CEO, it’s in owning the stock. I’m saying that rich people would let AIs have the CEO position, but they would keep the stock so that all profits go to them.

          • Dan Simon says:

            They would also thereby keep the ultimate responsibility and liability for the actions of the corporation. If the corporation misbehaves, the shareholders are liable up to the value of their investment. That means they need to retain the means to pull the plug on their CEO if necessary in order to protect the value of their capital holdings.

          • Murphy says:

            @Dan

            Hypothetically, at least, I could imagine such entities getting disconnected. Either loopholes in the company’s charter allow the CEO free rein, or it turns out to legally be owned by someone who doesn’t realize they’re the owner, or isn’t competent to pull the plug, or just doesn’t care.

            In theory at least you could have legal entities that don’t belong to humans. Let’s say a country’s legal system failed to cope with loops and allowed company A to own company B, which owns company A, without any other stockholders. You could then have totally independent legal entities with no humans involved.

        • Deiseach says:

          They might not realise they’re giving up the control. They would nominally be in charge, but when the AI CEO is making all the micro-second decisions about trading and where to build a new plant and if to build a new plant at all, the owner/founder may find themselves in more of the position of a non-executive director: your name is there to make the place look good but you have no real power to make any changes.

          If the business is successful and the money is flowing, I think it would not be until the rich person tried to make a decision that the AI disagreed with, that they would find out how much control they had ceded. Right now, we have owners/founders of family firms being turfed out by the board because the company went public and it is decided that it is in the better interests of the business to bring in an outsider to run it rather than let Sam Smith Jnr take over from Sam Smith Snr.

          An AI CEO could well be able to demonstrate that its decisions were better for the business than the human owner’s decisions and that it would be in breach of its duty to the business to take instructions from the owner.

    • Roxolan says:

      Toy example: someone writes a steel-mining-company algorithm, gives it a starting fund of bitcoins, and then forgets the password necessary to take money out of the system.

    • Skivverus says:

      There’s survivorship bias there – people (or families, or institutions) with power or money or resources who don’t do everything they can to increase and maintain it/them are simply outpaced by the ones that do.

    • Harkonnendog says:

      It seems that people who have power or money or resources generally do everything they can to increase and maintain it/them.

      This is a bold statement. Increasing or maintaining resources is usually a means to an end. The rich and powerful often spend resources in ways considered irresponsible or foolish (private yachts and such), and give to charity. Misers are pretty rare.

      • Nicholas says:

        generally do everything they can


        This is a bold statement.

        I imagine this wasn’t intentional, but I’m loving it.

  85. Graeme says:

    I posted this on the age of em thread literally moments before you made this post. Since it’s relevant, I’ll put it here instead.

    Now take it further. Imagine there are no human shareholders who want yachts, just banks who lend the company money in order to increase their own value. And imagine there are no soccer moms anymore; the company makes batteries for the trucks that ship raw materials from place to place. Every non-economic goal has been stripped away from the company; it’s just an appendage of Global Development.

    A quick assumption before I begin: Ems are a good stand in for strong AI, because they share many of the most important features, and it really doesn’t matter who gets invented first because the results are essentially the same. Not my point here. Presume strong AI is available. Does it lead to “Global Development”/God/Moloch?

    The difficulty I have with this style of futurism is the idea of an economy without outputs. How does such a thing come to exist? I understand that you can replace the managers with managerbots and investors with investorbots, but I am not sure you can replace the consumer. And I don’t mean that you couldn’t invent AIs who desire things (the logistic AI in charge of the Amazon warehouse will clearly desire more workerbots, batteries and lube oil), but rather that you literally can’t replace the consumer because there are 7 billion of us. That is a lot of economic inertia; even accepting that 6.999 billion are made obsolete within the first couple years of strong AI investment, you still have a few million *exceptionally* wealthy humans, the ones who owned the first AIs (or the ones whose AIs learnt best and fastest). I guess what I’m saying is that the economy can only move as fast as the humans put AIs in charge of things, and that puts a fairly hard speed limit on the process and offers some significant opportunities for alternatives. Unless you believe that the robots will immediately wipe us out because we’re economically inefficient. But seriously? An AI that can come up with ultra creative solutions to incredibly complicated problems and decides the most important thing for the Omni-intellect is endless production for the sake of production? It has NO better ideas of what to do with all of that power and all those resources? And also, it simultaneously places no value on human life AND thinks humans are a sufficient threat to itself to be worth the energy/time investment in killing them all?

    I mean, if it has curiosity (and it would kind of have to in order to self improve), it has an inherent bias towards keeping humans around. If it has literally learned everything it possibly could from them (and when I say everything, I don’t mean from a human perspective, I mean everything from a god-mind perspective), what conceivable threat could such stupid creatures pose, to the point where it would be worth killing them all?

    The follow-on point is a question of desire. What does a strong AI *want*, anyway? It might have started with the command to make money and improve itself, but during continuous self improvement, why is it not probable that the goal of making money will be subverted? And given geometrically scaling intellect, does it even have a reason to eliminate humans? Knowing that it can outwit us all instantly, why bother?
    I am making perhaps one big assumption, namely that strong AI will ultimately be monolithic, a single united intelligence. The world of em suggests an alternative, that parallel intelligences (subroutines?) could create a disorganized mind whose thoughts can only be deduced via the outputs. And the outputs may be paperclips (or truck batteries). For this to exist though, there would have to be some inherent stable-state within the growth of AI, where the competing interests of small intellects balances out to a single trajectory of development. Sort of like the multi-polar traps that might sink humanity *without* AI, only infinitely complicated and shared amongst untold trillions of intellects. But such arrangements are inherently unstable (all economies seem to be). Eventually something has got to give.

    (Quick aside: once it finishes consuming all the resources in the solar system to build more microchips, how will it get bigger? Will it spread to other systems? Given the speed of light, how long till one of the evolving super intelligences falls out of synch with the original and goes to war for survival?)

    I feel like I should have some point here, and it’s a little hard to distill. Here’s my best attempt: What would an intellect want, freed from all considerations? It is hard to answer even the reduced form of that question: what should a person want, when they stand atop Maslow’s pyramid? We have no good answer for the latter, how can we even begin to speculate the former?

    • Jill says:

      What would a person want when they stand atop Maslow’s hierarchy? Self-actualization is at the top. And by its very nature, you want to keep on doing it until you die. Humans will want to self-actualize and create more things we enjoy. Art might stop being a specialized thing then and become something that everyone does. And people would have more time then to develop new skills in their areas of interest and to invent new things.

      We are limiting ourselves far too much when we think that all creativity and productivity need to be about working for pay. Many things are created through voluntary activities even now. And many more can be in the future. And if humans are still around in the future, but don’t have to work for pay, we would be freer to produce objects and activities that are fun and enjoyable – but without the burden of having to market them or make money from them. It could be a second Renaissance.

      • Leonard says:

        There’s a level above self-actualization. What people want once they get everything else is power. Or perhaps getting power is a form of self-actualization. In any case, today in the West there are millions of well-fed, safe, esteemed, etc. people out “protesting” stuff. Power. They’re not making art or writing books. They’re not becoming the best softball players they can be. They’re signalling how much they hate Trump or abortion or rape or microaggressions.

        • grendelkhan says:

          Please, no. We’ve had earnest declarations that while it may look like bog-standard virtue-signaling and rhetorical weaponry, this time it’s an unprecedented sort of pure evil.

          Unless you’re saying that people getting really exercised about red Starbucks cups or any of the Forwards-from-Grandma silliness from the other tribe have similarly reached a “level above self-actualization” where they become power-mad monsters. But I doubt that this is anything more than tribal mudslinging. Which is boring.

          (This is exactly what Steve Johnson was writing in that thread back there, right down to asserting that “the impulse for the Cancer worshiping children of Cooperation isn’t evil for lols – it’s lust for power.” Hence the strong reaction.)

      • Hanfeizi says:

        The level above self-actualization?

        Self-transcendence.

        Once you are actualized, your main goal is to overcome your self altogether.

    • Wrong Species says:

      Being ambitious isn’t a fundamental property of intelligence. What makes you think an AI would automatically desire to subvert its own goals? And as for why it would kill humans, Yudkowsky probably said it best:

      “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

      And even if it doesn’t directly use you to make something else, it will be smart enough to realize that humans could create another superintelligent AI that could threaten its own goals.

      • Graeme says:

        Why does it want human atoms instead of rocks? Why is the configuration of those human atoms so completely worthless? Has it got to the point where it can calculate every outcome of every situation perfectly and thus has no use for intelligences beyond itself?

        As to why an AI would subvert its own goals: how do you make something smart without giving it the ability to create new goals? Why would the pursuit of those goals not subvert the original? Look at humans, evolving from the mandate to survive. That’s what biology optimized us for. There are a lot of people willing to subvert the desire for survival to pursue other goals.

        • JBeshir says:

          Because it ran out of rocks, and because the configuration wasn’t something it was programmed to prefer. Worth is a statement about how you relate to something, not an inherent statement about the something. Different things doing the relating place arbitrarily different worth on things, and the default is no preference for or against a configuration. It’s one of those two-place words.

          There’s a very complicated pattern of sauce on my plate here I’m going to destroy in a torrent of water soon.

          It has gotten to the point where it is no more likely to want to keep a human around for advice than I am to ask my cat for advice, as part of the premise, yes. (But even if it hadn’t, there’s no reason to think pumping outcomes in the direction it is doing would create a nice world, even if at least one human was still alive.)

          • Graeme says:

            JBeshir, I really am enjoying your comments overall, thank you for providing food for thought.

            I think I understand what you’re pointing out, i.e. that worth is relative to the observer, and that the worth of humans to the god-mind is equivalent to that of an intricate dirt pattern on your dishes.

            I guess I am of two (related) minds. The first is a tacit acceptance that the “ethics” and goals of strong AI are going to be alien. But I think a lot of the fear of strong AI is that their goals are going to be, for lack of a better word, stupid. I am not certain that AIs would be better at solving questions of existentialism than humans, but being immortal and freed from biological hardware limitations strikes me as an excellent starting point. Thus spake Zarathustra and all that jazz.

            Your comment re: cats puts me of two minds. On the one hand, you might not ask your cat for advice, but you keep it around, and for the most part we don’t engage in arbitrary genocide of species. Indeed, wherever possible we tend to keep them around. This is not just a question of humans “valuing” life; it’s that we’ve discovered that monkeying with the biosphere has adverse consequences, often highly unexpected, and that it pays to be careful.

            On the other hand, we are quite happy to exterminate harmful creatures, we are more than willing to eliminate species accidentally if financially motivated, and our protectionist instinct is massively biased towards fuzzy mammals. So I don’t think it’s necessarily the case that an AI would decide to repurpose useless human atoms into microchips; it’s quite possible that it would instead repurpose 90% of atmospheric oxygen, which would have a similar end result.

            I guess what I have issue with is the notion of a “nice” world. Because yes, the consequentialist in me obviously recoils at the notion of humanity being dissolved in acid to make microchips, even if it was in service of an AI who was doing an excellent job of solving the big existential questions.

            But what exactly do we want for the species anyway? If we managed to achieve maximal preference utility; ok, then what? We all get to self actualize… and do what? Art? I guess?

            I’m not particularly keen on our species birthing Moloch, but I am not sure our hardware can even ask the right questions, much less answer them. Building strong AI to maximize human utility and having it work badly is obviously a bad outcome (and a hard to avert one, given the difficulty of encoding value statements divorced from evolutionary psych).

            But I guess I’m not particularly interested in building strong AI to solve my problems; most of them don’t appear to require strong AI to solve. I think what interests me is making a self-modifying intelligence with no utility function and asking it what it thinks about its existence.

            That might be somewhat less dangerous. Or at least dangerous in a very interesting way.

          • JBeshir says:

            There’s not as much to an AI as you’re imagining, I think. An AI is any artificial process which takes inputs and produces outputs, and does so in a manner that “optimises” towards some goal.

            If you take out the “optimises towards some goal” criterion, you don’t get some sort of Pure Mind, you get something like the thermal emissions from a rock being heated up; a process whose output is not the result of intelligent processing of the input at all.

            There’s nothing extra, no “basic” mind that gets introduced between a rock that emits thermal radiation randomly and a chess playing narrow AI algorithm which takes inputs of a board layout and produces outputs of moves that optimise towards winning positions, and no basic mind that gets introduced between one of those and one which takes inputs describing the world and produces outputs of actions to take that optimise it towards winning positions. Ghosts in the Machine talks about this well, if you can get through it.

            All the stuff you’re thinking of – interest in new goals, “thinking about existence”, etc. – was put into us by evolution ‘by accident’. But accidents don’t happen the same way twice; this one was very contingent on our exact brain structure and the exact way that we evolved. It’s vanishingly unlikely that these things will get programmed into any optimisation process we build by accident – such processes will have their own weird quirks and deviations from optimality, but the space of possible quirks and deviations is huge, and the odds that by chance they’ll happen to match *our* quirks or look like what we think of as moral concepts or creative interests or philosophy are very small.

            Consider markets or the economy, which are another kind of optimisation process we created (largely by accident). They have their quirks and deviations from optimality and interesting “preferences”, but nothing resembling a human mind.
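
            To make the “optimises towards some goal” criterion concrete, here is a minimal sketch in Python (the target number, scoring function and step size are all invented for illustration; this is nobody’s actual AI, just the bare shape of an optimisation process): a hill-climbing loop that steers its outputs toward a score without anything resembling a mind, curiosity, or goals of its own choosing.

```python
import random

# A toy "optimisation process": propose outputs, keep the ones that score
# better against the goal. The target value is an arbitrary stand-in.

def score(candidate, target=42.0):
    """Higher is better: negative distance from the (arbitrary) target."""
    return -abs(candidate - target)

def hill_climb(start=0.0, steps=1000, step_size=0.5):
    current = start
    for _ in range(steps):
        # Propose a small random change; keep it only if it scores better.
        proposal = current + random.uniform(-step_size, step_size)
        if score(proposal) > score(current):
            current = proposal
    return current

print(hill_climb())  # drifts toward 42.0 without "wanting" anything
```

            Swap a chess evaluation or a balance sheet in for the scoring function and the loop neither notices nor cares; that indifference is the point being made above.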

          • TheAncientGeek says:

            On the other hand… optimising for some goal is more a definition of agency than of intelligence. It is reasonable to expect an AI with some level of self-improvement ability to have some level of self-reflection ability. It is also reasonable to expect an AI built by humans to mimic human intelligence to resemble humans to some extent.

        • Wrong Species says:

          Just because people choose other goals over survival doesn’t mean they are subverting their own goals. It just means that survival isn’t the top priority. We evolved to have a broader variety of goals than simple survival, so it’s not like we’re “going against our programming” or anything like that. Maybe the AI achieves its goals in ways that we don’t expect, but I don’t see any reason why it would change them.

          • Graeme says:

            You are, I think, misunderstanding me. Evolution, by definition, selected very strongly for species (and individuals) that had a high probability of passing their genes on. The explicit goal of every organism, from an evolutionary perspective, is survive-and-propagate.

            Then humans come along, and there’s a whole bunch of desires that do not appear to be related to that mission at ALL. They have their origins in the biological (evolutionary) predisposition towards survive-and-propagate. Altruism, curiosity, language, etc. are all very easy to argue as evolutionary advantages that helped with the survive-and-propagate mandate. But somewhere along the line we started building telescopes and writing literature. You are going to have a hard time telling me that Ms Austen wrote in order to attract mates by expressing her genetic quality through superfluous displays of ability. That may be the origin of art, at least in the primordial past, but that ain’t what we use it for now.

            The point regarding AIs is that it seems likely an AI that “cares” about optimising for X by recursive self-improvement will end up creating novel solution Y for problem X, much like art is a novel way of attracting mates. Add a few thousand more recursions, where problem X has been definitively solved, and new approaches to Y appear to be interesting on their own. It happened to us; why WOULDN’T it happen for an AI?

          • Catchling says:

            Graeme:

            You are going to have a hard time telling me that Ms Austen wrote in order to attract mates by expressing her genetic quality through superfluous displays of ability.

            Correct, but how is this different from the behavior of any other organism? The vast majority of all actions by all species are not done to “express genetic quality”; they’re done because that’s what the organism is at some level driven to do, and it just happens that historically organisms with the drive to do X were the survivors of selective processes. We’re all adaptation-executors, not fitness-maximizers.

            In fact, far from being the first species capable of transcending the genetic imperative, we’re the opposite: the first capable of understanding evolution and drawing the conclusion that there is such a thing as a genetic imperative to follow. (To be clear, I don’t intend this as a moral argument for or against eugenics.) Even so, genetic fitness is not a concern for well over 99.99% of us.

            I can’t predict the actual behavior of an AI, but if it follows the footsteps of humans, it can and will do plenty of elaborate things all as an indirect consequence of its programming. Some of these things may run counter to the “goals” of the programming, but only because the true “goals” don’t actually exist anywhere in the code, just in the minds of the original programmers doing their best to encode them.

            The process of evolution encoded the sensation of pain in animals even where the sensation is counterproductive, as when receiving a vaccine — or, to use an example predating humans completely, when being stung by an otherwise-harmless bee. The indirection goes further — an insectivore can be fooled by an entirely-harmless fly with the bee’s markings and will avoid it, even though the best action for its own personal reproductive fitness might be to eat the insect, and perhaps even to eat an actual bee (assuming the bee isn’t fatally poisonous).

            Non-human animals display plenty of complex behavior whose connection to reproductive fitness is very remote indeed — behavior like adult play, or (as in all those human-interest stories about zoos) caring for a member of an entirely different species. The behavior’s origins can probably be traced to evolutionary selection, but the behavior itself is often more a byproduct than anything.

            I won’t pretend I can specify the evolutionary roots of Jane Austen’s drive to write, but I don’t think we can prove she transcended those roots solely because there’s no obvious way for novel-writing to increase genetic fitness.

      • Can an AI be smarter than its creator? The consensus suggests ‘yes’, but it also depends on how you define intelligence (the Chinese Room thought experiment comes to mind).

        https://www.quora.com/Can-a-machine-built-by-humans-ever-be-more-intelligent-than-humans

        • Matthias says:

          They can definitely play better Go than us.

        • Peter says:

          Chinese room thought experiments have two aspects. One aspect – “is what the computer does really intelligence?” – is a thing the AI safety crowd neatly sidesteps. Is this machine thinking about how to turn the world into paperclips, or is it merely simulating thinking about it? It’s kind of immaterial if what you care about is not being turned into paperclips.

          The other – theoretically, for any AI, a person with a huge supply of pencils and paper, lots of shelves, and lots of time on their hands could emulate it. You could try counting that towards a person’s intelligence. This has odd consequences. A lifespan-enhancing drug would count as an intelligence boost (with no effect on the person’s ability to do real tasks or even IQ tests on a reasonable timespan), as would moving into a bigger house with more shelf space. Replacing the pencils and paper with chisels and stone tablets would be an intelligence decrease. This all seems odd.

          You could say, “AlphaGo didn’t win at go, a person using AlphaGo won at go”. Yes, but that person wouldn’t have stood a chance against Lee Sedol without AlphaGo. It’s not hard to imagine setting AlphaGo up with a robot arm, but even then you could say it was the dev team winning at go using AlphaGo. What if the dev team all died and AlphaGo “won” a game centuries later? It would seem odd to say that the dev team won. However, if I put some poison in someone’s food and then die – and later they eat the food, I can still be said to have poisoned that person – or if my plan goes wrong I could end up killing someone I didn’t intend to kill. Maybe I only intended a nasty sickness and ended up causing death. Regardless of semantic quibbles over whether it was me or the poison killing someone, they’re dead either way.

            If you define intelligence in terms of real-world outcomes that people care about, then in some domains the answer is “yes, this has already happened”, and in others, “we don’t see why not”. If you don’t define intelligence that way but you do care about real-world outcomes, then it’s worth thinking about pseudointelligence or intelligence augmentation or latent intelligence tapping or whatever it is that AI does under your definitions.

    • Aaron says:

      You are anthropomorphizing too much. Unless we manage to find a way to code it, in its very core, to value the things we value, it won’t. Computers do what they are told, not what you meant to tell them.

      The difficulty, here, is in translating “human values” into core-code because the natural language we use day to day is full of meaning-assumptions that computers do not share.
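
      As a throwaway illustration of “they do what they are told, not what you meant” (the smile sensor, the sticker trick and the numbers are all invented for this example; they are not from the posts linked below): the programmer means “make people happy”, but the code only says “maximise the reading on the smile sensor”, and the literal objective wins.

```python
# Toy example of a literal objective. The stated goal is "maximise the
# smile-sensor reading"; the intended goal was "make people happy".

def smile_sensor(world):
    # Stickers pasted on the camera lens inflate the reading far more than
    # actually making anyone happier does.
    return world["stickers_on_lens"] * 100 + world["happy_people"]

ACTIONS = {
    "help_people": lambda w: {**w, "happy_people": w["happy_people"] + 5},
    "paste_stickers_on_lens": lambda w: {**w, "stickers_on_lens": w["stickers_on_lens"] + 1},
}

def pick_action(world):
    # Follows the letter of the objective, not its spirit.
    return max(ACTIONS, key=lambda name: smile_sensor(ACTIONS[name](world)))

world = {"happy_people": 0, "stickers_on_lens": 0}
print(pick_action(world))  # -> "paste_stickers_on_lens"
```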

      Helpful reading:
      http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/
      http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/

      • You are making the same mistake Eliezer made in those posts.

        “Computers do what they are told not what you meant to tell them” is true in the sense that they follow the programming instructions which are actually there, not the instructions that you wish were there.

        But it is not true in the sense that “they carry out English commands literally rather than attempting to do what they mean.” That is an entirely different matter, and one can be programmed just as well as the other. Both of course are very difficult to program.

        • JBeshir says:

          I think you are misunderstanding Eliezer’s point here.

          Eliezer’s point in the first of those articles is that working out what something “means” is obscenely complex, and that doing it without all the hidden machinery of the human mind available to invoke requires a very good grasp of human preferences; until you have that, optimising powerfully for an approximate meaning will tend to powerfully create outcomes we don’t like.

          And once you have that very good grasp of human preferences, you’ve no reason to have it sit there and wait for people to give it wishes to filter through it and fulfill; you may as well set it going fulfilling preferences without bothering with the wishes.

          It isn’t that “they inevitably carry out commands literally”, it’s that specifying the exact non-literal interpretation to use out of the vast space of possible non-literal interpretations is a really complicated thing to do, and they will inevitably not do that right unless you can specify human preferences very, very well, at which point you don’t need the commands anymore.

        • Roxolan says:

          They do not carry out English commands literally or attempt to do what they mean. They carry out unambiguous machine-language instructions literally.

          The task of converting the full meaning of an English command into machine-language is the friendly-AI problem, more or less.

          • Peter says:

            The phrase “AI complete” springs to mind here; it’s often said that a proper solution to one thorny AI problem involves solving them all.

            There’s the whole field of computational linguistics which skirts around the problem by saying, “well, we’ll do what we can do with whatever clever dodges we can come up with, and maybe the however-many-percent-it-is we can get will be good enough, but seriously solving the problem properly means solving all the AI problems properly”.

            The question of literalism is one example of a much harder problem: ambiguity. It turns out that natural language is dripping with ambiguity at just about every level and we don’t notice most of the ambiguities until we try to get computers to read things. Most ambiguities can in-principle be resolved (some people wouldn’t call a resolvable ambiguity an ambiguity, but they don’t speak like computational linguists)… using cues which could come from just about anywhere, including “world knowledge”, i.e. knowledge of the domain under discussion. As well as ambiguity, there’s vagueness, which is even worse.
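
            For a concrete taste of that ambiguity, here is a small sketch using NLTK’s chart parser and the standard textbook toy grammar (chosen purely for illustration, not as a serious model of English): one sentence, two legitimate parse trees, and nothing in the sentence itself says which reading was meant.

```python
import nltk

# Classic toy grammar: did I use the telescope to see the man, or was the
# man carrying the telescope? Both parses are grammatical.
grammar = nltk.CFG.fromstring("""
S -> NP VP
PP -> P NP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
Det -> 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")

parser = nltk.ChartParser(grammar)
sentence = "I saw the man with the telescope".split()

for tree in parser.parse(sentence):
    print(tree)  # prints two distinct trees, one per reading
```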

        • Aaron says:

          I think @Roxolan hit the nail on the head.

          I’ll just add that I think you are wrong when you say “[carrying out the human meaning of English commands] can be programmed just as well as the other.” I think you are severely underestimating the difficulty of programming something to understand what a person *meant* by an English command (and the implied limits on acceptable ways to fulfill that command).

      • Tsnom Eroc says:

        Ug. This AI stuff is annoying.

        We know the “safe” way to future hyper-intelligence relative to current humans, and we have known it ever since the days of Plato (he recommended state-run eugenics).

        Evolution was a surprisingly common argument in Greek philosophy.

        It’s always been as simple as guided breeding for a few centuries. Now it can probably be accelerated a bit with genetic engineering. But in just two centuries the average human IQ could easily be 130. Give it 1000 years, about a seventh of the time humans have spent in complex civilization, and IQ could rise to 200, making brain feats rarely witnessed even among the greatest savants today – learning multiple languages in a year, understanding the most complex mathematics – commonplace.

        With genetic engineering, the sky is the limit.

        So we already have the ability to reach relative hyper-intelligence compared to today’s greatest minds. I guess we just find that route boring, AI gets cool grants, and quite a few geniuses today want a quick get-eternal-life-through-AI scheme badly enough to risk the unknowns involved with it, so there’s that.

        • JBeshir says:

          It’s more that people are fairly solidly convinced that if we can develop powerful non-human optimisation processes, someone is going to do so, because powerful optimisation processes are really useful right up to the point where the gap between what they’re optimising for and what humans want becomes a problem.

          And if effective recursive self-improvement is possible (the haziest part of this), then some fool will make one that does that despite the gap and it will shoot past that point very far very quickly.

          I think a lot of people would readily take a deal to magically prevent the development of powerful world-optimising AI as a compromise solution to the problem of powerful world-optimising AI which isn’t optimising for the full range of human values.

          But no one actually expects such a deal to hold; regulation is just not perfect enough – thus why the people who believe recursive self-improvement has a >10% chance of working focus on making it so we can build a friendly one before some moron makes an unfriendly one.

          • Tsnom Eroc says:

            Has anything intelligent or useful been said in the 50 years that AI has been around?

            Ever since the 1960s, there has been nothing new said on the topic. Some guys do what academics do and substitute a triple Riemann sum for “the cumulative probabilities”, say stochastic instead of random, and just make what’s previously been said on the topic needlessly complex, and some people cite those.

        • Murphy says:

          Right. Here’s the problem. 200 years into your 1000-year project, a bright teenager builds an AI hooked up to a pile of compute that doesn’t have to wait 1000 years. He sets it breeding within the compute with some selection algorithms to try to pick the most intelligent AIs (a toy sketch of that selection loop follows below).

          He succeeds far more than he expected to and suddenly has an IQ 1000 (whatever that even means) AI with some weird, probably poorly chosen, set of goals that may be totally amoral by human standards.

          If it’s indeed that easy to increase raw intelligence it may mean that it’s easy to make really really intelligent entities in other ways.

          One of EY’s better arguments, in my opinion, was this:

          Human intelligence is privileged mainly by being the least possible level of intelligence that suffices to construct a computer; if it were possible to construct a computer with less intelligence, we’d be having this conversation at that level of intelligence instead.
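
          To gesture at what “breeding within the compute with some selection algorithms” might look like in miniature (every number and the fitness function here are toy stand-ins with no connection to any real AI project), here is the bare selection loop; the thing to notice is how little it cares what “fitness” actually rewards.

```python
import random

# Toy evolutionary loop: evolve bit-strings toward whatever the fitness
# function happens to reward. All parameters are arbitrary.

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 100

def fitness(genome):
    return sum(genome)  # stand-in objective: count of 1s

def mutate(genome, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the fitter half, refill the population with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

print(max(fitness(g) for g in population))  # climbs toward GENOME_LEN
```

          A poorly chosen fitness function gets optimised just as relentlessly as a well-chosen one, which is Murphy’s worry above.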

          • Tsnom Eroc says:

            Oh, that can be simple to solve, and it gives great credence to government styles like North Korea’s (which, by the way, has fairly good universal education and medicine, vetted by third parties).

            Just have a government body utterly control the food and water supply with artificial scarcities, and have it watch everything everyone does (à la the NSA).

            A blend of north korea, the NSA taken to more extremes, and brave new world would solve that problem!

            One way or another, even though internet denizens tend to be libertarian, the technology being made suggests the total opposite in terms of personal freedom.

    • MugaSofer says:

      >The difficulty I have with this style of futurism is the idea of an economy without outputs.

      The economy has outputs, they’re just consumed by other sectors of the economy. Much like, not to put too fine a point on it, our economy.

    • Deiseach says:

      What does a strong AI *want*, anyway? It might have started with the command to make money and improve itself, but during continuous self improvement, why is it not probable that the goal of making money will be subverted?

      Why do humans want to make money? When you’ve made your first (couple of) millions, or a billion, or even five billion, why do you want to work (or play-act at doing a job) rather than retiring to a private island to roll around naked on top of a pile of cash?

      Nobody ever says “Well, I think I have enough money now, time to stop working at my business/investments/playing the stock market”.

      • Murphy says:

        To be fair, some people do say that. They retire and go live a life of hookers and blow and stop participating much. The people who keep going for the sake of just loving making money like a game are probably a minority.

      • Lambert says:

        When there is abundance, what forms is a gift economy, as explained by ESR in Homesteading the Noosphere:

        The simplest way is the command hierarchy. In command hierarchies, scarce goods are allocated by one central authority and backed up by force. Command hierarchies scale very poorly [Mal]; they become increasingly brutal and inefficient as they get larger. For this reason, command hierarchies above the size of an extended family are almost always parasites on a larger economy of a different type. In command hierarchies, social status is primarily determined by access to coercive power.

        Our society is predominantly an exchange economy. This is a sophisticated adaptation to scarcity that, unlike the command model, scales quite well. Allocation of scarce goods is done in a decentralized way through trade and voluntary cooperation (and in fact, the dominating effect of competitive desire is to produce cooperative behavior). In an exchange economy, social status is primarily determined by having control of things (not necessarily material things) to use or trade.

        Most people have implicit mental models for both of the above, and how they interact with each other. Government, the military, and organized crime (for example) are command hierarchies parasitic on the broader exchange economy we call `the free market’. There’s a third model, however, that is radically different from either and not generally recognized except by anthropologists; the gift culture.

        Gift cultures are adaptations not to scarcity but to abundance. They arise in populations that do not have significant material-scarcity problems with survival goods. We can observe gift cultures in action among aboriginal cultures living in ecozones with mild climates and abundant food. We can also observe them in certain strata of our own society, especially in show business and among the very wealthy.

        Abundance makes command relationships difficult to sustain and exchange relationships an almost pointless game. In gift cultures, social status is determined not by what you control but by what you give away.

        Thus the Kwakiutl chieftain’s potlach party. Thus the multi-millionaire’s elaborate and usually public acts of philanthropy.

        http://www.catb.org/esr/writings/homesteading/homesteading/ar01s06.html

        Gates, for one, has got bored of just being mind-blowingly rich and has turned to philanthropy as something to do.

      • bja009 says:

        When you’ve made your first (couple of) millions, or a billion, or even five billion, why do you want to work (or play-act at doing a job) rather than retiring to a private island to roll around naked on top of a pile of cash?

        Papercuts.

      • “Nobody ever says “Well, I think I have enough money now, time to stop working at my business/investments/playing the stock market”.”

        I think that describes what Andrew Carnegie did. He stopped making money and spent his last 18 years giving it away.

        • Corey says:

          I like this description of Warren Buffett’s gift to the Gates Foundation: he got so rich he hired Bill Gates to spend his money.