Don’t Fear The Filter

There’s been a recent spate of popular interest in the Great Filter theory, but I think it all misses an important point brought up in Robin Hanson’s original 1998 paper on the subject.

The Great Filter, remember, is the horror-genre-adaptation of Fermi’s Paradox. All of our calculations say that, in the infinite vastness of time and space, intelligent aliens should be very common. But we don’t see any of them. We haven’t seen their colossal astro-engineering projects in the night sky. We haven’t heard their messages through SETI. And most important, we haven’t been visited or colonized by them.

This is very strange. Consider that if humankind makes it another thousand years, we’ll probably have started to colonize other star systems. Those star systems will colonize other star systems and so on until we start expanding at nearly the speed of light, colonizing literally everything in sight. After a hundred thousand years or so we’ll have settled a big chunk of the galaxy, assuming we haven’t killed ourselves first or encountered someone else already living there.

But there should be alien civilizations that are a billion years old. Anything that could conceivably be colonized, they should have gotten to back when trilobites still seemed like superadvanced mutants. But here we are, a perfectly nice solar system, lots of every type of resource you could desire, and they’ve never visited. Why not?

Well, the Great Filter. No one knows specifically what the Great Filter is, but generally it’s “that thing that blocks planets from growing spacefaring civilizations”. The planet goes some of the way towards a spacefaring civilization, and then stops. The most important thing to remember about the Great Filter is that it is very good at what it does. If even one planet in a billion light-year radius had passed through the Great Filter, we would expect to see its inhabitants everywhere. Since we don’t, we know that whatever it is it’s very thorough.

Various candidates have been proposed, including “it’s really hard for life to come into existence”, “it’s really hard for complex cells to form”, “it’s really hard for animals to evolve intelligence”, and “actually space is full of aliens but they are hiding their existence from us for some reason”.

The articles I linked at the top, especially the first, will go through most of the possibilities. This essay isn’t about proposing new ones. It’s about saying why the old ones won’t work.

The Great Filter is not garden-variety x-risk. A lot of people have seized upon the Great Filter to say that we’re going to destroy ourselves through global warming or nuclear war or destroying the rainforests. This seems wrong to me. Even if human civilization does destroy itself due to global warming – which is a lot further than even very pessimistic environmentalists expect the problem to go – it seems clear we had a chance not to do that. A few politicians voting the other way, we could have passed the Kyoto Protocol. A lot of politicians voting the other way, and we could have come up with a really stable and long-lasting plan to put it off indefinitely. If the gas-powered car had never won out over electric vehicles back in the early 20th century, or nuclear-phobia hadn’t sunk the plan to move away from polluting coal plants, then the problem might never have come up, or at least been much less. And we’re pretty close to being able to colonize Mars right now; if our solar system had a slightly bigger, slightly closer version of Mars, then we could restart human civilization anew there once we destroyed the Earth and maybe go a little easy on the carbon dioxide the next time around.

In other words, there’s no way global warming kills 999,999,999 in every billion civilizations. Maybe it kills 100,000,000. Maybe it kills 900,000,000. But occasionally one manages to make it to space before frying their home planet. That means it can’t be the Great Filter, or else we would have run into the aliens who passed their Kyoto Protocols.
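
To make the arithmetic concrete (the numbers below are made-up round figures, purely for illustration, not estimates): if a billion civilizations reach the industrial stage, the expected number of spacefaring survivors only drops below one when each civilization’s chance of passing the filter falls under roughly one in a billion.

```python
# Illustrative arithmetic only: how lethal must a filter be before we expect
# *zero* spacefaring survivors? The candidate count is a made-up round number.
candidates = 1_000_000_000  # assumed civilizations that reach the industrial stage

for survival_rate in (1e-1, 1e-3, 1e-6, 1e-9, 1e-12):
    expected_survivors = candidates * survival_rate
    print(f"survival rate {survival_rate:.0e}: expected colonizers ~ {expected_survivors:.3g}")

# Anything short of roughly one-in-a-billion lethality still leaves survivors
# whose expansion we would expect to have noticed by now.
```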

And the same is true of nuclear war or destroying the rainforests.

Unfortunately, almost all the popular articles about the Great Filter miss this point and make their lead-in “DOES THIS SCIENTIFIC PHENOMENON PROVE HUMANITY IS DOOMED?” No. No it doesn’t.

The Great Filter is not Unfriendly AI. Unlike global warming, it may be that we never really had a chance against Unfriendly AI. Even if we do everything right and give MIRI more money than they could ever want and get all of our smartest geniuses working on the problem, maybe the mathematical problems involved are insurmountable. Maybe the most pessimistic of MIRI’s models is true, and AIs are very easy to accidentally bootstrap to unstoppable superintelligence and near-impossible to give a stable value system that makes them compatible with human life. So unlike global warming and nuclear war, this theory meshes well with the low probability of filter escape.

But as this article points out, Unfriendly AI would if anything be even more visible than normal aliens. The best-studied class of Unfriendly AIs are the ones whimsically called “paperclip maximizers” which try to convert the entire universe to a certain state (in the example, paperclips). These would be easily detectable as a sphere of optimized territory expanding at some appreciable fraction of the speed of light. Given that Hubble hasn’t spotted a Paperclip Nebula (or been consumed by one) it looks like no one has created any of this sort of AI either. And while other Unfriendly AIs might be less aggressive than this, it’s hard to imagine an Unfriendly AI that destroys its parent civilization, then sits very quietly doing nothing. It’s even harder to imagine that 999,999,999 out of a billion Unfriendly AIs end up this way.

The Great Filter is not transcendence. Lots of people more enthusiastically propose that the problem isn’t alien species killing themselves, it’s alien species transcending this mortal plane. Once they become sufficiently advanced, they stop being interested in expansion for expansion’s sake. Some of them hang out on their home planet, peacefully cultivating their alien gardens. Others upload themselves to computronium internets, living in virtual reality. Still others become beings of pure energy, doing whatever it is that beings of pure energy do. In any case, they don’t conquer the galaxy or build obvious visible structures.

Which is all nice and well, except what about the Amish aliens? What about the ones who have weird religions telling them that it’s not right to upload their bodies, they have to live in the real world? What about the ones who have crusader religions telling them they have to conquer the galaxy to convert everyone else to their superior way of life? I’m not saying this has to be common. And I know there’s this argument that advanced species would be beyond this kind of thing. But man, it only takes one. I can’t believe that not even one in a billion alien civilizations would have some instinctual preference for galactic conquest for galactic conquest’s own sake. I mean, even if most humans upload themselves, there will be a couple who don’t and who want to go exploring. You’re trying to tell me this model applies to 999,999,999 out of one billion civilizations, and then the very first civilization we test it on, it fails?

The Great Filter is not alien exterminators. It sort of makes sense, from a human point of view. Maybe the first alien species to attain superintelligence was jealous, or just plain jerks, and decided to kill other species before they got the chance to catch up. Knowledgeable people like Carl Sagan and Stephen Hawking have condemned our reverse-SETI practice of sending messages into space to see who’s out there, because everyone out there may be terrible. On this view, the dominant alien civilization is the Great Filter, killing off everyone else while not leaving a visible footprint themselves.

Although I get the precautionary principle, Sagan et al’s warnings against sending messages seem kind of silly to me. This isn’t a failure to recognize how strong the Great Filter has to be, this is a failure to recognize how powerful a civilization that gets through it can become.

It doesn’t matter one way or the other if we broadcast we’re here. If there are alien superintelligences out there, they know. “Oh, my billion-year-old universe-spanning superintelligence wants to destroy fledgling civilizations, but we just can’t find them! If only they would send very powerful radio broadcasts into space so we could figure out where they are!” No. Just no. If there are alien superintelligences out there, they tagged Earth as potential troublemakers sometime in the Cambrian Era and have been watching us very closely ever since. They know what you had for breakfast this morning and they know what Jesus had for breakfast the morning of the Crucifixion. People worried about accidentally “revealing themselves” to an intergalactic supercivilization are like Sentinel Islanders reluctant to send a message in a bottle lest modern civilization discover their existence – unaware that modern civilization has spy satellites orbiting the planet that can pick out whether or not they shaved that morning.

What about alien exterminators who are okay with weak civilizations, but kill them when they show the first sign of becoming a threat (like inventing fusion power or leaving their home solar system)? Again, you are underestimating billion-year-old universe-spanning superintelligences. Don’t flatter yourself here. You cannot threaten them.

What about alien exterminators who are okay with weak civilizations, but destroy strong civilizations not because they feel threatened, but just for aesthetic reasons? I can’t be certain that’s false, but it seems to me that if they have let us continue existing this long, even though we are made of matter that can be used for something else, that has to be a conscious decision made out of something like morality. And because they’re omnipotent, they have the ability to satisfy all of their (not logically contradictory) goals at once without worrying about tradeoffs. That makes me think that whatever moral impulse has driven them to allow us to survive will probably continue to allow us to survive even if we start annoying them for some reason. When you’re omnipotent, the option of stopping the annoyance without harming anyone is just as easy as stopping the annoyance by making everyone involved suddenly vanish.

Three of these four options – x-risk, Unfriendly AI, and alien exterminators – are very very bad for humanity. I think worry about this badness has been a lot of what’s driven interest in the Great Filter. I also think these are some of the least likely possible explanations, which means we should be less afraid of the Great Filter than is generally believed.

210 Responses to Don’t Fear The Filter

  1. gunlord500 says:

    From Hanson’s paper:

    Finally, we should remember that the Great Filter is so very large that it is not enough to just find some improbable steps; they must be improbable enough. Even if life only evolves once per galaxy, that still leaves the problem of explaining the rest of the filter: why we haven’t seen an explosion arriving here from any other galaxies in our past universe? And if we can’t find the Great Filter in our past, we’ll have to fear it in our future.

    Well, it’s a pretty big universe, and it’s not as if every single galaxy is close together. Maybe our particular galactic neck of the woods happens to be too far away from any billion-year-old Matryoshka divine God-Galaxies for them to have taken notice of us.

    • von Kalifornen says:

      I do agree that intergalactic distance is vast — possibly too vast for a traditional colony ship. But not for a seed of some form.

  2. Stefan says:

    The laws of physics and their consequences for what is possible aren’t probabilistic. It may just be impossible to get that much organization from one star to another on the time scales that are available. Or it is possible, but the consequences are what it looks like from Earth: a bit patchy and hard to observe from here. Yes, parts of the universe haven’t been transformed, but the process is patchy — we got lucky so far. Just like there are still a few stretches of nature out there on Earth…

  3. Piano says:

    > These would be easily detectable as a sphere of optimized territory expanding at the speed of light.
    If there are such spheres out there, we would only be able to detect them simultaneously with their engulfing us, by virtue of the speed of light. An entire almost-hemisphere of our known universe could be part of such a sphere right now, with it reaching us tomorrow morning, us none the wiser before or after/dead.

    • Scott Alexander says:

      That’s a really good point.

      (on the other hand, them engulfing us would be a VERY detectable form of detection, and the fact that none have done so in a billion years means they’re probably not out there)

      • JTHM says:

        Erm… this seems like a pretty clear case of anthropic bias.

        • F. says:

          If the explanation of Fermi’s paradox is that the ordinary final outcome of evolution is a genocidal, lightspeed-growing AI sphere (and therefore the only two states a biological civilization can be in are unaware or dead), then an unconsumed civilization like ours is much more likely to exist in a younger universe, and we should therefore be seeing a younger universe. So this rules it out unless it is combined with some other solution to the paradox, such as life or intelligence being very rare.

          Edit: I just noticed a comment below that says the same. I leave my comment here just to show that many people are thinking the same.

      • Shmi Nux says:

        Presumably they are made of massive matter, not light, so they would expand somewhat slower than light. And back when the expansion just started, it was significantly slower than light, but still pretty noticeable (consuming star systems tends to be pretty noticeable). So we’d have had plenty of advance warning by now.

        • Coffee Stain says:

          Depends on how far away the event occurs. There is some distance which a civilization identical to ours could be from us and be entirely undetectable. Assuming that the civilization which creates a light sphere is right on that horizon, we would have the amount of time it took for the technology to progress from zero to c as warning before we were consumed. That is, exactly the time they colonize any system closer to us. The civilization doesn’t *create* the light sphere, the civilization *is the light sphere* so it’s going to take them time to speed up.

          However, if they are some substantial distance beyond the undetectability horizon to begin with, they will have some time to accelerate. Their unfriendly AI will slowly expand, but as it does its agents will become more intelligent as they relay information and processing at the speed of light between each other. At some point, if it can get enough intelligence moving quickly enough, the rate of expansion will become close to c. And up until some rate of expansion, they might still be undetectable. After all, energy leaking in all directions of space is wasted energy. We only have the advance warning based on some function of expansion speed and distance. Maybe the anthropic math works out to suggest that the Fermi Paradox implies low values of this function.
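
          (A rough sketch of that speed-and-distance function, with placeholder numbers rather than predictions: a sphere that began expanding at distance d and grows at a fraction f of lightspeed first becomes visible here after about d/c years and arrives after about d/(f·c) years, so the warning window is roughly d·(1/f − 1) years for d in light-years.)

```python
# Rough sketch of the warning window discussed above: the gap between first
# light from an expansion sphere's origin reaching us and the sphere itself
# arriving. Distances and speeds are placeholder values, not estimates;
# cosmological expansion and the sphere's ramp-up phase are ignored.

def warning_window_years(distance_ly: float, fraction_of_c: float) -> float:
    first_light = distance_ly              # light needs d years to cross d light-years
    arrival = distance_ly / fraction_of_c  # the sphere's edge needs d/f years
    return arrival - first_light

for d in (100, 10_000, 1_000_000):         # light-years
    for f in (0.5, 0.9, 0.99):             # expansion speed as a fraction of c
        print(f"d = {d:>9,} ly, v = {f:.2f}c -> warning ~ {warning_window_years(d, f):,.0f} yr")
```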

        • Xycho says:

          Isn’t the reasonable followup to this an attitude that says “If we spot signs of an alien spacefaring civilisation, start armouring up because that was the edge of the blast wave and we have a couple of centuries before they get to us”?

          Particularly since Saberhagen-style Von Neumann probes would be the most practical way for a sensible near-spacefaring civilisation to prepare the galaxy for colonisation – wipe out anything else that might have a desire for real estate, and then colonise later once ‘manned’ spaceflight is good enough.

          EDIT: Actually, no. Perhaps not everything, just any civilisation with individuals who suffer from akrasia or seem otherwise likely to be difficult to negotiate with. Which they would notice a long time before they spotted any changes we could make, so we’re screwed anyway and in such a scenario would/will end up being punished for not persuading human beings to act as a coherent general utility maximiser fast enough.

    • Nick T says:

      If spheres of paperclips are common, we would expect to exist near the earliest time in the universe when we could have evolved, since older universes are mostly paperclips. But we don’t; sufficient (maybe less favorable, I don’t know, but still sufficient) conditions for intelligent life have existed for billions of years. (I believe there’s a paper by Nick Bostrom and/or Milan Cirkovic expanding this argument, but couldn’t find it with a little searching.)

    • Jess Riedel says:

      This doesn’t really change anything. The fact that we will die shortly after receiving the evidence of alien civilizations doesn’t change the fact that we haven’t gotten the evidence. Our current existence is strong evidence that such aliens don’t exist, just as the clear skies are strong evidence that slightly less aggressive aliens don’t exist. (Anthropic arguments here that try to say death is special do not work.)

      • Piano says:

        I haven’t done any math yet, but it’s plausible that intelligent life did not have the opportunity to fully develop before more than a few billion years ago, and said spheres are all far away/spread out enough not to have hit us yet.

        The observable universe is just under 100 billion light years across.
        Take “few” to mean 5, and ((5*2)^3)/(100^3) = 1/1000. Sphere packing density is ~0.74, so there could be at least 700 civilizations that have gone supernova within our observable universe (assuming no expansion of the universe), all with no idea that any of the others exist.
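
        (A quick check of the arithmetic above, taking the round figures as the commenter’s assumptions: a ~100-billion-light-year-wide observable universe and expansion spheres of 5-billion-light-year radius.)

```python
# Quick check of the back-of-the-envelope above. All figures are the comment's
# round assumptions: a ~100 Gly-wide observable universe and expansion spheres
# of radius 5 Gly, packed without overlapping (i.e. none has engulfed another).
universe_diameter_gly = 100
sphere_radius_gly = 5
packing_density = 0.74   # densest packing of equal spheres

# The comment compares cubes of diameters, i.e. (2r)^3 / D^3 = 1/1000.
volume_fraction_per_sphere = (2 * sphere_radius_gly) ** 3 / universe_diameter_gly ** 3
max_mutually_unaware_spheres = packing_density / volume_fraction_per_sphere

print(f"each sphere takes up {volume_fraction_per_sphere:.4f} of the volume")
print(f"so up to ~{max_mutually_unaware_spheres:.0f} such spheres could fit, none aware of the others")
```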

        • Jess Riedel says:

          The speed of light sets an upper bound on colonization rate, so our observations are a *tighter* restriction on faster growing aliens than slower ones.

    • Crimson Wool says:

      It wouldn’t actually be expanding at the speed of light. Depending on its efficiency protocols, it might be expanding at “just” a substantial fraction of c. But even in the case where it burns all the energy it can get its hands on to make itself go as fast as possible, it’s impossible for an object with mass to move at light speed, and on a sufficiently large scale we would get to see it before we got wiped.

      • Piano says:

        Depending on the speed and scale, probably a few days’ warning. That doesn’t really change much.

  4. The Anonymouse says:

    I mostly find the hubris amusing. Much like how every generation has people who proclaim that, no really, this time it really is the biblical End Times, we seem to think it’s our current-day problems that are going to just happen to be the Great Filter.

    Of course, such things are not entirely arrogance; there’s usually politics attached. Folks who think it’s global warming, or AI, or deforestation, or nuclear weapons… well, they usually have an agenda to go with their theory. It’s just not as sexy to think, “huh, it was probably the emergence of sexual reproduction.”*

    *Or, for that matter, some completely unforeseeable Filter-level hazard that will emerge in the future. If you can’t hang an agenda on it, and use that agenda to beat your political adversaries over the head, it just isn’t that interesting, is it?

    • Oligopsony says:

      This strikes me as one of the better delusions to have.

    • MugaSofer says:

      Well, technology does improve over time. And the possibility of a Singularity would suggest we already lie unusually close to the end.

      For the record, I think nukes – that is, the Cold War – get dismissed too easily. Some of our close calls were so fine as to resemble some sort of anthropic weirdness “saving” us.

      But that would suggest we passed at least a sizable chunk, since we’re less threatened by nukes these days. Not as sexy.

      • roystgnr says:

        The trouble with nukes-as-Great-Filter is that they were an existential risk for our current civilization but not so much for our species. Not to minimize the suffering of a couple billion deaths, but anything which leaves even a couple hundred thousand survivors would just have us trying for the stars again in another few millennia.

        Biotech and nanotech, on the other hand, maybe? If the jump in capabilities from tiny life to tiny engineering ends up being comparable to the jump from, say, birds to planes, then intelligent civilizations might all eventually find themselves succumbing to the first crazy individual to fabricate a “supervirus”.

  5. Darcey Riley says:

    I worry that great filter arguments tend to anthropomorphize other species that may have evolved on other planets. It seems self-evident to us that life is something which tries to replicate itself as much as possible (since processes that are optimized for replication will eventually overtake those that aren’t), and as you say, it just takes one successful space-faring replication-optimizer to colonize the universe, so maybe I’ll accept that level of anthropomorphization.

    But the following bothers me:

    What about alien exterminators who are okay with weak civilizations, but destroy strong civilizations not because they feel threatened, but just for aesthetic reasons? I can’t be certain that’s false, but it seems to me that if they have let us continue existing this long, even though we are made of matter that can be used for something else, that has to be a conscious decision made out of something like morality.

    “Conscious decision” implies that alien exterminators are conscious and can make decisions; why should we assume either of these things? And even if they did make a conscious decision to let us live, there’s no reason to assume it’s based on anything like morality. Even among humans, morality is not always the key motivation; it only plays a particularly strong role in our society because we’ve rejected other motivations like religion and aesthetics.

    Since morality is such a human-specific thing (and morality as we know it is specific to our culture), I give way less probability to “aliens are keeping us around for moral reasons” than “aliens are keeping us around until they are ready to eat us”.

    • peterdjones says:

      We don’t know that morality is human specific. It might be the case that it always evolves under a broad set of circumstances, such as the need to allocate resources and avoid disputes.

      • Vulture says:

        What do we mean by “morality” here? Obviously any intelligence (or more broadly, optimization process) has preferences, by definition. What makes a certain set of preferences a system of morality, beyond simply being a utility function?

        • peterdjones says:

          Playing a certain functional role, i.e. allowing conflict settlement and resource allocation to take place within a group of rational agents on a voluntary basis.

    • Paul Torek says:

      Within that context of Scott’s argument, “morality” could be a stand-in term for any preferences beyond mere survival and expansion, e.g. aesthetic preferences.

  6. Jess Riedel says:

    So I love the great filter as a concept, but I’m always a little surprised at the amount of handwringing done over the later filters. In the absence of contradictory estimates, abiogenesis seems easily improbable enough to explain everything. The chance creation of a self-replicating object from chemical components doubtlessly required many events that were each highly improbable and essentially independent. It’s easy to produce astronomically tiny probabilities from, say, various quantum tunneling calculations. Things like the Miller–Urey experiment and models of walls created by hydrophobic-philic materials don’t really help. The fact that life appeared only a few hundred years after conditions on earth could support it is only very weak evidence against its difficulty.

    Anyone know of a good lower bound on the probability?

    • Douglas Knight says:

      You can reconcile the difficulty of abiogenesis with the ease of life appearing on earth by invoking panspermia; the easy step was colonization. But panspermia means that the whole galaxy has passed that filter which is not reassuring.

    • The coexistence of cellular organisms, prions, crystals, and viruses suggests that ‘self-replication’ isn’t particularly physically difficult. Some of those replicators may have a common ancestor, but even so the sheer diversity of replicators on a single planet makes it unlikely that replication in itself is the Filter. Perhaps some other abiogenesis-associated thing is the Filter; I agree it needs to be very early.

      Also, life arose really, really, really early on Earth. If life is much more improbable than anything else leading up to intelligence, it should arise at the last possible minute, when it arises at all.

      • Desertopa says:

        If life arising is that improbable, it doesn’t mean that it should arise at the last moment possible. It may be tremendously improbable to flip a coin and have it land on its edge and balance, but if you flip a coin a trillion times, you’re no less likely to have it come up edgewise in the first hundred than the last.

        If your goal is to flip a coin until it lands on its edge and stays there, and once this happens, you stop, then if it happens early, barring other evidence, it’s probably not that unlikely. So by this metric, abiogenesis looks like it’s probably not *that* improbable. On the other hand, it probably wasn’t all *that* probable either, otherwise we’d probably have gotten multiple points of genesis. At the point where all the proto-organisms were all chemosynthesists feeding on the highly abundant chemicals in the ocean, they’re not really imposing much competitive pressure on each other, so one lineage wouldn’t have immediately killed off all the others. Maybe there could have been multiple genesis events, but the descendents of one lineage were so much more fit that once they did come into competition, they wiped out all the others, but it strikes me as improbable that if there were any other lineages they would be exterminated from literally every niche.

        Plus, our own attempts at experimental abiogenesis haven’t managed to produce replicators so far by any sort of process that would have been available on early earth. So to go back to the coin flipping analogy, if we had time for a trillion coin flips, we might have gotten a hit some time in the first million or so where it’s hard to distinguish it with our resolution from the very beginning, but we probably didn’t get it in the first hundred or so.

        • > On the other hand, it probably wasn’t all *that* probable either, otherwise we’d probably have gotten multiple points of genesis

          A lot of people suspect that there were multiple points of origin. All our data is consistent with multiple independent lineages having fused early on, or with one lineage having driven all the others to extinction. It’s consistent with current science for us to suppose, even, that a random independent abiogenesis event occurs once per week on average — but the new lineage is always quickly gobbled up by an organism related to us, because the new life lacks any evolved defense mechanisms.

        • Desertopa says:

          I already discussed the possibility of multiple lineages where one drove the others to extinction. I find that doubtful; we might see one lineage dominate, but I think it’s a lot less likely we’d see one lineage fill literally every niche.

          I didn’t discuss the possibility of spontaneous abiogenesis occurring in our present world, since of course, it’s unlikely that we’d notice.

          I did not address the possibility of multiple lineages which fused, which on consideration I suppose may be more plausible than a case where one lineage wipes out all others without fusion (or rather, a fuse-extinct scenario where the fused lineage is by far the most fit.) But on the other hand, to the best of my knowledge we don’t have any known examples of fused lineages which drove unfused lineages out of all niches. Incorporation of chloroplasts didn’t drive all chloroplastless cells to extinction, and incorporation of mitochondria didn’t drive all mitochondrialess cells to extinction. Fusion at such an early stage, where the proto-organisms were far less developed, could have offered a much more decisive advantage. But I wouldn’t bet much on that resulting in total extinction of all but one lineage.

          If the question is, “if we know that the world had a significant number of points of abiogenesis right at the beginning, would we expect to see the outcome we do today?” then I think the answer is “more likely than not, no.”

    • MugaSofer says:

      “The fact that life appeared only a few hundred years after conditions on earth could support it is only very weak evidence against its difficulty.”

      That … actually sounds like fairly strong evidence. If life isn’t easy-ish, why did it happen?

      After all, if we knew life was hard but didn’t know when it happened, we would assume it probably didn’t happen as soon as it could. That’s just an antiprediction, right?

      So why didn’t the antiprediction come true?

    • The_Duck says:

      I agree; it seems very probable to me that the filter is in our past. In addition to abiogenesis I think the Cambrian explosion is a plausible candidate. Billions of years passed between the origin of life on Earth and the Cambrian explosion. That’s what you’d expect to observe if the evolution of complex life is very rare.

      > only a few hundred years

      Surely you mean “only a few hundred million years”?

    • More on this: suppose life began from a spontaneously forming strip of RNA, which was able to self-replicate. The RNA strand would be as short as possible while still being able to catalyze its own replication, but that may still be as long as 200 codons. If the codons of the strand were formed randomly, there would be a 4^(-200) chance of creating life for every RNA strand. This is probably sufficiently unlikely that any causal patch of the universe would have at most one creation event.

      Now, there’s a corollary: Suppose there were also a completely different strand of RNA with 210 codons that was also capable of self-replicating. Then the chance of it forming would be 4^(-210), a factor of 4^10 less likely than the earlier mechanism. If two lifeforms encounter one another they are almost surely both from the more likely origin. In other words, almost all life would have originally formed in essentially the same way. (See the numerical sketch after this comment.)

      Even granting my premises, it’s still unlikely that all lifeforms would have exactly the same common ancestor. A self-replicator would probably have lots of small variations that work about as well, and it might even succeed with a section full of random junk somewhere in the middle. Still, all lifeforms should come from the same common design.

      A more serious attack would be to question my premises. If the shortest self-replicator had only 30 codons, or there was another mechanism that was even more probable for making life, then this line of reasoning is ruined.
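
      (A quick numerical check of the comment’s arithmetic. The 200- and 210-codon strand lengths and the uniform-random-assembly model are the commenter’s illustrative assumptions, not established biochemistry; probabilities this small underflow ordinary floats, so the sketch works in log10.)

```python
import math

# Numerical check of the comment above. Strand lengths and the random-assembly
# model are the commenter's illustrative assumptions. We work in log10 because
# 4**-200 underflows a float.

def log10_prob_random_strand(length_codons: int) -> float:
    """log10 probability of assembling one specific strand of this length,
    with 4 equally likely choices at each position."""
    return -length_codons * math.log10(4)

p200 = log10_prob_random_strand(200)   # about -120.4
p210 = log10_prob_random_strand(210)   # about -126.4

print(f"200-codon replicator: ~10^{p200:.1f} per random strand")
print(f"210-codon replicator: ~10^{p210:.1f} per random strand")
print(f"the longer one is ~10^{p200 - p210:.1f} times (= 4^10 = {4**10:,}) less likely to form")
```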

  7. Troy says:

    Your arguments about the candidate filters you consider here are pretty good. I’m most skeptical about what you say concerning alien exterminators: it seems to me that there are enough variables concerning alien psychology, abilities, and decision-making processes that we can’t be confident that if they existed they’d have wiped us out by now. For example, perhaps they have some kind of “Prime Directive” against interfering with civilizations on their home planets, but any uninhabited worlds (or solar systems, perhaps) they take over and don’t let anyone else near. Maybe there’s substantial disagreement among the aliens what to do concerning other species: in the past they wiped out a bunch, but then some progressive aliens got upset at their intolerance, and now the progressives control the government. Or maybe they’re like the Formics/Buggers and didn’t realize that their genocidal actions were really genocidal, but now they do and so they’ve stopped.

    • von Kalifornen says:

      That… gets even worse, actually.

      I can see three potential forms of civilizations

      1. Planetary civilizations that may be visible to SETI but are not very prominent over huge areas. They may or may not have drama and disagreement going on. They may use low-speed starships to colonize slowly, but probably will have a hard time coordinating, and cannot project force between star systems. These include the humans in Avatar.

      2. Interstellar civilizations that have fast and/or robust starships. These things are going to be LOUD. Their engine flares, and any interstellar war weapons, will be spectacular and easily visible from earth. They will be more likely to coordinate, and colonies will be more likely to flourish and reproduce. They may be stealthy IFF they approach *us* at near light speed but not otherwise, and not once they pass us. These civilizations may be paranoid and a huge threat, depending on galactic game theory and the technologies for interstellar missiles and star-system antirelativistic defense. These include every space-opera ever, the Formics, and marginally The Killing Star.

      3. Trans-technologic powers that may have experienced a singularity or otherwise preferred miniaturization and the use of information technology over the manipulation of force and energy. These include most of the imagined paperclipping UFAIs, CelestAI, and Yudkowsky’s hope. They may have perfect coordination, and may colonize huge areas at relativistic speeds, or may spread themselves across light-years without anybody noticing.

      The first and last can do things without us noticing. The second would probably have to break physics to hide.

  8. Ialdabaoth says:

    Personally, I think the Great Filter is some variant of wireheading – once you reach the level of science necessary to leave your home system, you ALSO reach the level of science necessary to understand your own construction enough to find and short-circuit yourself a “win” button.

    • N says:

      Probably not, for the same reason that ‘Transcendence’ isn’t a good filter — there would probably be some aliens that abstained from wireheading, right? Certainly it seems that many humans wouldn’t want to wirehead themselves if the technology were to become available.

    • Stephen says:

      But again, it only takes one species that doesn’t want to wirehead…

      I mean, just think of the human body. The only reason individual cells in the body don’t compete with one another is because they all share the same genome, and even then we need strong anti-cancer mechanisms to prevent uncontrolled growth. Now picture a bunch of random unrelated alien species out there with (presumably) wildly different traits. There’s no reason to expect any kind of analogous “anti-cancer” mechanism for species, so if even one of the aliens winds up with the desire to expand, they’ll do so and quickly come to dominate the galaxy/observable universe.

    • von Kalifornen says:

      I dunno. Our civilization already has the former technology (the most important ingredient is nuclear bombs. Lots of nuclear bombs. Also a huge nation-state.) but hasn’t implemented it. The latter technology is less compatible (at least for human-ish psychology) with supporting your civilization and may even be self-limiting: become decadent wireheads, Rome collapses, barbarism, some barbarians get a leg up technologically, and the Empire Of Science rises again.

      • Nornagest says:

        I’m not sure about that. Orion can get you between planets in a star system pretty easily, but it can’t get you past a couple percent of the speed of light, and designing a smallish (ship-sized to city-sized) self-contained colony that’s stable over generations strikes me as a much harder challenge for present-day engineering than shock dampers and a ton of nuclear bombs would be.

        • lmm says:

          Are there any recorded cases of the kind of social failure scenarios people seem to worry about for generation ships? I mean, we colonised e.g. Australia mostly by dumping a load of criminals there and leaving them to it, and they managed to put together a decent civilization. Even somewhere as tiny as Pitcairn (again colonized by criminals) managed to sustain a more-or-less stable society (with what sounds like some unfortunate sexual dynamics, but nothing that would prevent a colony ship working).

        • Nornagest says:

          I don’t know of any cases of social failure under similar circumstances, although similar circumstances are hard to come by. I wasn’t talking so much about social engineering as about boring old physical engineering, though; stable closed biospheres are pretty tough to make, and as far as I know ones of that size have never successfully been implemented. Certainly not by the 1970s.

          From a social engineering perspective, I wouldn’t be concerned with wholesale collapse so much as with loss of the skills needed for colonization. A stable generation ship is a pretty different environment from a growing planetary colony.

    • MugaSofer says:

      We have wireheading, and have done for quite a while. Yet somehow, as a species, we utterly dodged that bullet so far.

      Admittedly the cost of entry is quite high right now, but honestly “really cool videogames” seems much more plausible as a world-eater than “wireheading”.

      (And it doesn’t seem that plausible, as Scott said, because there would be at least some defectors from decadence.)

      • Faul_Sname says:

        I think “really cool videogames” is, in this context, considered “wireheading”.

  9. This is probably completely wrong, but what if unfriendly AIs have been created lots of times, and they always turn into dark matter-optimizers? Since we don’t know what dark matter is, what if it’s some substance that unfriendly AIs always decide to create for some reason?

    • Douglas Knight says:

      That’s basically the transcendence hypothesis.

    • von Kalifornen says:

      Shouldn’t that collapse the galaxy eventually?

    • MugaSofer says:

      That’s actually a damn good hypothesis.

      Needs details to justify it, though, which of course decrease the total probability it’s true – but maybe not enough to rule it out.

      What are they making it out of? Why can’t they convert stellar matter into dark matter? Why haven’t they squashed us, if we might create a superintelligent AI threat? How do they power themselves – they’re certainly not eating suns to get it. Are there multiple kinds of dark matter? If not, what the heck has everything converging on this one goal? If there are different, competing brands of dark matter … how does this not reduce to a regular aliens-shoulda-gotten-to-us-by-now scenario?

  10. blacktrance says:

    Which is all nice and well, except what about the Amish aliens? What about the ones who have weird religions telling them that it’s not right to upload their bodies, they have to live in the real world? What about the ones who have crusader religions telling them they have to conquer the galaxy to convert everyone else to their superior way of life? I’m not saying this has to be common.

    But if it’s sufficiently uncommon, it could still serve as a filter. Suppose if there were 10,000 advanced alien races focused on expansion, we would’ve met one or two of them by now. But if there are one or two expansionary races and the other 9998 are wireheading or are engaged in some other passive activity, that could explain why we haven’t detected anyone yet. You say “it only takes one”, but if there’s only one, it could take a very long time for it to expand far enough to be detectable from Earth.

    Alternatively, maybe there are alien races that are theoretically detectable, but the signs of their civilization are so bizarre that we don’t think of looking for them, or maybe we already possess all the information we need to detect alien life and we’re just not parsing it correctly.

    • Kaj Sotala says:

      You say “it only takes one”, but if there’s only one, it could take a very long time for it to expand far enough to be detectable from Earth.

      http://www.sciencedirect.com/science/article/pii/S0094576513001148 :

      The Fermi paradox is the discrepancy between the strong likelihood of alien intelligent life emerging (under a wide variety of assumptions) and the absence of any visible evidence for such emergence. In this paper, we extend the Fermi paradox to not only life in this galaxy, but to other galaxies as well. We do this by demonstrating that travelling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilisation, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to. Given certain technological assumptions, such as improved automation, the task of constructing Dyson spheres, designing replicating probes, and launching them at distant galaxies, become quite feasible. We extensively analyse the dynamics of such a project, including issues of deceleration and collision with particles in space. Using similar methods, there are millions of galaxies that could have reached us by now. This results in a considerable sharpening of the Fermi paradox.

      (emphasis mine)

    • von Kalifornen says:

      Most ways that civilizations or FOOMing AIs can expand rapidly are really loud (in terms of starship engine flares in the first case and rapidly de-atomized territory visible by telescope in the second) even outside their domain.

  11. Vanzetti says:

    “Life in this world,” he said, “is, as it were, a sojourn in a cave. What can we know of reality? For all we see of the true nature of existence is, shall we say, no more than bewildering and amusing shadows cast upon the inner wall of the cave by the unseen blinding light of absolute truth, from which we may or may not deduce some glimmer of veracity, and we as troglodyte seekers of wisdom can only lift our voices to the unseen and say, humbly, ‘Go on, do Deformed Rabbit . . . it’s my favorite.’ ”

  12. File 13 says:

    IMO the great filter is that space is really big. (cue Douglas Adams quote)

    • Steve says:

      But space is also really old, which more than cancels this out. The galaxy is 10^5 times as old as it is wide.

  13. JayMan says:

    About extraterrestrial intelligence? Bottom line:

    We don’t know. We (for now) have no way to know. All our evidence so far and the mathematical implications of such say it’s extremely likely – essentially a given – that they exist. Given our ignorance on the matter, it is the height of arrogance, at this point, to declare that we indeed have evidence of absence (maybe once we’ve done a reasonable census of our local galactic region, that might change). End of story.

    • MugaSofer says:

      So … you’re content to ignore strong evidence of an upcoming extinction event rather than … reason under uncertainty, which is something we do constantly?

      Given our ignorance on the matter, it is the height of arrogance to declare we’re safe until it’s proven otherwise to your satisfaction. That’s privileging the hypothesis. End of story.

    • jsalvatier says:

      Nonsense. We have evidence of absence. We don’t have proof of absence but we certainly have evidence. http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/

      • Eli says:

        That only applies to evidence you would expect to see. What is our belief that our current instruments can detect the kind of alien life we expect to see? If that belief is very low, then we lack Bayesian evidence against the existence of aliens, because we lack instruments we would expect to pick up such evidence.
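
        (A small sketch of how that cashes out in Bayes’ rule, with made-up numbers: a null result only moves the posterior far when the probability of detection given that aliens exist is high.)

```python
# Sketch of the point above: silence from our instruments is strong evidence
# against aliens only if those instruments would probably have detected them.
# All probabilities below are made up purely for illustration.

def p_aliens_given_silence(prior: float, p_detect_if_aliens: float) -> float:
    """P(aliens | no detection) by Bayes' rule, taking false positives as impossible."""
    numerator = prior * (1 - p_detect_if_aliens)
    denominator = numerator + (1 - prior) * 1.0  # no aliens -> silence, with certainty
    return numerator / denominator

prior = 0.5  # assumed prior that detectable-in-principle aliens exist
for p_detect in (0.99, 0.5, 0.05, 0.001):
    print(f"P(detect | aliens) = {p_detect:>5}: P(aliens | silence) = "
          f"{p_aliens_given_silence(prior, p_detect):.3f}")
```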

  14. peppermint says:

    YOU EXIST BECAUSE WE ALLOW IT, AND YOU WILL END BECAUSE WE DEMAND IT

    Sorry, I just can’t take this topic seriously, so I’m quoting the best line from the game Mass Effect. Back when XKCD was good he made a good comic about this issue – http://xkcd.com/384/

    • von Kalifornen says:

      Hmmmm… The Reaper Wars will be loud, so the extermination epoch must be larger than the diameter of the galaxy in LY.

      Unless they are just a tiny, self-replicating something OH CRAP.

      May God Save the empire.

    • lmm says:

      XKCD was never good. (Or rather, was never better than at present)

  15. Vaniver says:

    if our solar system had a slightly bigger, slightly closer version of Mars,

    Apparently, there was a ~10 year period when it was plausible that Venus was as hospitable as the Earth. What could have been!

  16. suntzuanime says:

    On the other hand, if the Great Filter is behind us, that makes our existence extremely improbable, and I no longer trust the Anthropic Principle to sweep that improbability under the rug. The Great Filter is probably in front of us, and whatever form it may take it is hard to come up with one that’s good.

    • I think we exist too late in the universe for the bulk of the Filter to be in our future, even if we had any credible mechanism that could do the job.

      I don’t grok your objection to a past Filter. If the universe is infinitely large, life (for example) could be arbitrarily improbable, as long as its probability is nonzero.

  17. F. says:

    Let’s say that out of all civilizations that ever reach lightspeed expansion technology, 99 out of 100 behave like genocidal paperclip makers, and the 100th behaves as “exterminators who are okay with weak civilizations, but kill them when they show the first sign of becoming a threat”. Let’s call these “galactic cops”.

    If this is true, then we must be living in galactic cop territory (since we aren’t paperclips).

    Scott says that galactic cops cannot be the great filter because “don’t flatter yourself here. You cannot threaten them”. But this doesn’t really make sense to me. There must be some point at which a civilization growing within their territory, becomes a rival. Just like a growing cancer is only a tiny lump of tissue, but at some point, it becomes dangerous. Only at that point, they care to intervene (not necessarily by genocide, maybe just by imposing limits), and their regular intervention makes sure that they remain the only galactic civilization.

    So the question remains, why don’t we see their presence in the sky? Well, it is most likely that the entire observable universe is in the territory of a single galactic cop civilization. If so, then there’s nothing surprising. What you see in the sky IS what the galactic cop territory looks like. Maybe they modified it, and maybe not – maybe large scale engineering just isn’t their hobby. Or maybe they only care about dark matter.

    The point is, if only 1 galactic civilization engulfs the observable universe, then nothing is surprising.

    • MugaSofer says:

      Good points.

      In fairness, I think this is kind of what Scott is proposing – he just doesn’t think (say) nuclear weapons could possibly be the difference between the prime directive and “threat – exterminate”.

    • Russell says:

      Yes the enforced non-interference does work. They would be forced to stop us interfering with life in other solar systems because of symmetry etc. If we made a Von Neumann Replicator or discovered primitive life on another solar system and tried to stuff it up then that would solve the paradox. If we are stopped, then we are in galactic cop territory otherwise if we are not then we are the first intelligent civilization in the galaxy.

      Rather than cops they may be “mad scientists” who like watching civilizations grow, then only interfere when one experiment threatens to disturb another (interstellar travel). Yes this does seem pretty contrived however and my most likely place for the filter is that life is very hard or complex cells are hard.

  18. F. says:

    Then again, the quickest answer to Fermi’s paradox might be a much simpler one: What the heck do you expect to see?

    Why expect alien civilizations to care about us enough that they answer our messages? Would you hold a conversation with an ant hill?

    Why expect the sky to look dramatically different if anyone is out there? If you look at the Earth from orbit, except at night when there are electric lights, it doesn’t look any different with or without humans. It might be that the sky just doesn’t look any different with or without aliens.

    Why expect them to come to Earth for resources? Maybe the particular resources that are most valuable to them, cannot be found in this solar system. Just like we mine certain spots of desert for oil, but not other spots of deserts where there is no oil. Maybe they live off darkmatter, and do not mine carbon or whatever because that would mean diverting resources from darkmatter mining.

    Also, if their scientists are watching us, it’s likely that they don’t want to be seen, like we do when we study animals in their environment.

    I understand that Scott’s answer to this would be the one he puts out under “the great filter is transcendence”, that is to say, it only takes one alien civilization that happens to be interested in colonizing Earth. But you see, if there are multiple interstellar civilizations, then they also have mutually exclusive territories. Even if they don’t compete for similar resources (and of course they do), at least they would establish zones of control simply to secure their own safety from alien aggression. We aren’t within range of an unlimited number of interstellar civilizations. This by the way is a variant of my argument in the previous post. You can combine the two.

    • roystgnr says:

      I’d expect to see stellar spectra that have stopped looking like pure low-entropy photosphere output and started looking like high-entropy thermal radiator output.

  19. John Maxwell IV says:

    Another point against the filter being a garden-variety x-risk: Look at bonobos. They’re one of the smartest species on the planet, but they’re apparently much better at loving one another and cooperating with one another than we humans are. So it’s plausible that many planets that get to the “intelligent species” phase do so with a species like bonobos, and bonobos seem like they’d be much better than us at avoiding nuclear war and whatnot.

    • ADifferentAnonymous says:

      The Machiavellian Intelligence Hypothesis says that intraspecies competition is what drove the human intelligence explosion. If this is true, it’s no coincidence that bonobos didn’t become the intelligent species.

  20. F. says:

    Short version of my answer to the paradox:

    1 – If there are space civilizations, then they have mutually exclusive territories.

    2 – Therefore, we are under control of only one space civilization.

    3 – Why don’t we see anybody? because the particular civilization around us happens not to care about contacting us or mining our system or altering the galaxy so much that we can tell it must be aliens.

    The anthropic principle makes part of (3) necessary (we’d be dead otherwise).

    • anon says:

      I don’t think that’s a valid use of the anthropic principle because the goal is to distinguish between reasons that we’re not dead.

      • F. says:

        Unfortunately I don’t understand why the anthropic principle doesn’t apply. In case I didn’t word it clearly, I just meant that a part of statement (3) (that our space overlords, whom we have already assumed are there, aren’t into exploiting Earth) follows from the anthropic principle, not that the entirety of (3) or the entire argument follows from it.

    • von Kalifornen says:

      That’s one really quiet civilization.

  21. Vadim Kosoy says:

    Good post. Nitpick:

    “if our solar system had a slightly bigger, slightly closer version of Mars, then we could restart human civilization anew there once we destroyed the Earth and maybe go a little easy on the carbon dioxide the next time around.”

    There are no fossil fuels on Mars so there’s nothing to go easy on next time.

  22. Kaj Sotala says:

    The “this isn’t strong enough of a filter to filter out everyone” argument is valid if a filter is proposed as being THE filter, but it doesn’t exclude the possibility that e.g. ordinary x-risks wipe out some civilizations and transcendence locks in the rest. (I’m not saying that I would consider this a very probable explanation, but I’m also not saying that I’d consider it terribly improbable.)

    The notion of combined filters actually raises an interesting thought. Suppose that there are two filters (say) in front of us, and the first one wipes out a lot of civilizations and the second one wipes out the rest. If this were true, it would suggest that the kinds of civilizational traits that cause you to survive the first filter anticorrelate with surviving the second filter, and vice versa.

    Large-scale coordination ability might plausibly be one thing that caused such an anticorrelation. For example, suppose that technology continues making it increasingly possible for lone individuals or small groups to cause large-scale damage, to the point where e.g. a bioengineered pandemic could threaten our continued survival as a technological civilization. Now, I imagine that the civilization most likely to survive such a period would be one that had very strong coordination capabilities and was effective at wiping out defectors and deviants. Drawing on sci-fi cliches, maybe some insect hive mind civilization.

    But then, this kind of a civilization would also be very good at wiping out the weird deviants who didn’t want to convert their planet into computronium and live the rest of their lives as uploads, never leaving the rest of the solar system! Thus they would fail the second filter for exactly the reason for why they survived the first one. And vice versa – if humanity survived to the second filter, there’s a good chance that we’d pass through it to colonize the rest of the galaxy, but it’s exactly because of that that we won’t survive the first one.

    (These examples were chosen mostly for the sake of illustration and I do not hold that they would necessarily be the most likely ones. In particular, having just two future filters may be the biggest oversimplification in the example: maybe there are ten of them (as well as several in our past), each of which removes a small percentage of civilizations but which together combine to remove all of them.)
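
    (To put rough numbers on the several-filters picture, under the simplifying and assumed condition that the filters act independently: ten filters that each pass only about one civilization in eight already multiply out to a one-in-a-billion combined pass rate, so none of them individually has to look apocalyptic.)

```python
# Rough illustration of how several moderate filters can combine into one
# Great Filter, assuming for simplicity that they act independently. (The
# comment above argues the filters may anticorrelate, which would make the
# combination stronger still.) The one-in-a-billion target is a made-up figure.
target_combined_pass_rate = 1e-9

for n_filters in (1, 2, 5, 10):
    per_filter_pass_rate = target_combined_pass_rate ** (1 / n_filters)
    print(f"{n_filters:>2} independent filters: each needs to pass only "
          f"{per_filter_pass_rate:.3g} of civilizations")
```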

    • Vadim Kosoy says:

      I think those sorts of arguments are unconvincing, since it’s easy to imagine aliens that pass all possible filters (e.g. paperclip maximizers), and therefore we need much stronger arguments to explain why that kind of alien is much less probable than all other kinds. On the other hand, I don’t see any good argument against a Great Filter in the past (e.g. replicator formation), which makes it, in my view, a more likely explanation by far.

      • Kaj Sotala says:

        “It’s easy to imagine a civilization that passes all possible filters” isn’t evidence for it actually being possible for such a civilization to evolve. E.g. we could wipe ourselves out before ever getting to the point where we could actually build a paperclipper.

  23. anon says:

    Their risk of being crushed by a rival seems to be greater if they choose to expand, rather than smaller.

    • von Kalifornen says:

      That is a possibility, although it doesn’t seem anywhere near enough. Interstellar war is a bit… weird.

  24. The filter is probably early, since (a) we live in an old universe, and (b) there’s not much room for us to make a big discovery about the improbability of space colonization, whereas there’s plenty of room for big new discoveries about how hard it is to evolve multicellularity, or cell walls, or basic defenses against parasitic ‘free rider’ genes.

    If something independently arose twice in our evolutionary history (e.g., intelligence in cephalopods and tetrapods), we can mostly rule it out as a filter candidate. Things that arose very quickly (including life itself) are also poor candidates.

    The chemical components for life as we know it don’t seem rare; and self-replicators seem pretty diverse, since we have several fundamentally different varieties (prions, viruses, cellular life, etc.) on our planet alone.

    My guess would be that there are a lot of extremely powerful, easy-to-evolve biochemical mechanisms that block further evolution; and/or a lot of easy-to-evolve mechanisms that make evolution too rapid and low-fidelity for adaptations to accumulate that depend on multiple organism components; and we got lucky in happening to avoid both classes of adaptation, or in happening on an early adaptation that prevents those things.

    I have a harder time seeing how things like multicellularity or sexual reproduction could be great filters; sexual reproduction doesn’t seem difficult or indispensable enough, and mutualisms akin to multicellularity seem to have arisen lots of times.

  25. spandrell says:

    Obviously the Great Filter is the leftist singularity.

    Once civilizations reach a certain level of wealth, junior members of the elite start complaining about rights for sexual deviants, cross-dressers and reparations for the descendants of slaves from 2 centuries earlier.

    Next thing you know civilizations lose the ability to visit space. Then it gets better after a couple of millennia until the descendants of malaria-resisting slaves start getting journalist jobs again.

    Cycles come and go until a big asteroid impacts the planet, completing the Great Filter.

    • von Kalifornen says:

      You think that the aliens are going to have our political problems? These are ALIENS, not the Kohms and Yangs from that one Star Trek show.

      Even a fairly broad generalization of power-vs-perfection struggle STILL isn’t enough to cut it off anywhere near perfectly. Look at modern Russia and China — they seem to be somewhat above these problems. Tip the dial of authority just a little further toward fascism and you might really get the thousand-year realm that launches the dreadful ships and tiles the universe with Aryans. Or New Soviet Men. Or maybe the leftist singularity starts later and launches the ships as its last great project — remember how it began! What if Rome had fallen later, or the First World War never wracked Europe? Perhaps knowledge flees the fading light of the West to Arabia, like it did before!

      We went from basic preindustrial technologies to a rapid approach to space-faring in only 260 years. That’s really fast.

      Then multiply these chances by a million for aliens who have politics unimaginable by you or me.

    • ozymandias says:

      It is a little-known fact that crossdressing destroys spacefaring ability in all civilizations, including those that don’t have sexes or clothes.

      • Piano says:

        If members of a sexless naked civilization started covering themselves in the name of the existence of some “sex” thing, where “sex” is the appropriate divisive nonsense, then yes, that would probably fuck things up if the movement got popular enough with high-status people. “Elites engaging in sophisticated fashionable nonsense that they can survive but non-elites cannot” is a universal enough concept to apply to aliens. Examples include not eating babies in the name of Babyeating!Aristotle’s Ethics, and faking living in the real world in the name of Pleasurebeing!Plato’s parables.

        • ozymandias says:

          How do you know aliens even have elites?

        • Piano says:

          @ozy
          The alternative is every alien of a certain population being exactly the same, which is highly unlikely. Where there is any difference at all, there is pressure for hierarchy.

        • Anonymous says:

          Are you talking about the Fall?

    • anodognosic says:

      Boy, everything really does start looking like a nail, doesn’t it?

      • spandrell says:

        A very neglected corollary of that saying is that people who have never used a hammer tend not to see nails even when the nails are staring them in the face.

        • anodognosic says:

          “A Marxist could not open a newspaper without finding on every page confirming evidence for his interpretation of history; not only in the news, but also in its presentation — which revealed the class bias of the paper — and especially of course what the paper did not say.”

          But I’m super glad your pet theory explains literally everything.

  26. Why can’t the great filter be “a sequence of smaller filters”? If you have three mostly independent filters, each of which you have, say, a one in a thousand chance of making it through (e.g. blow ourselves up, pollute our environment beyond our ability to fix it, succeed but decide interstellar travel is too hard and wirehead ourselves to extinction instead), then there’s a one in a billion chance right there.
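
    A minimal sketch of that arithmetic, with the one-in-a-thousand figures kept as purely illustrative placeholders rather than estimates:

    ```python
    # Three hypothetical, roughly independent filters, each passed by only
    # 1 civilization in 1,000 (illustrative numbers from the comment above).
    filters = [1e-3, 1e-3, 1e-3]

    p_survive_all = 1.0
    for p in filters:
        p_survive_all *= p

    print(p_survive_all)  # ~1e-09, i.e. roughly one survivor per billion civilizations
    ```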

    • suntzuanime says:

      Most likely the great filter is something we haven’t even thought of. Anything we can think of, we can cleverly plot to avoid, but that which we can’t foresee we can do nothing but run into.

    • Prima facie, none of the candidates for ‘Great Filter’ status look like they can be even mediocre filters. So in hypothesizing a filter, you’re trying to predict a relatively specific fundamental error in our picture of the universe. Three mediocre filters might work just as well as one Great Filter, but it’s perhaps likelier that we’ve made one very specific mistake that’s very, very, very large, than that we’ve made three very specific mistakes that are of middling size.

      • anon says:

        I don’t see smaller filters as multiple moderate mistakes in our understanding of the universe. People generally acknowledge the existence of smaller filters. The failure is singular and large: we fail to tally all the small filters together and recognize their total impact.

        • If you’re suggesting a single error to account for a general huge overestimation of the likelihood of major events, then it sounds like you’re pointing the finger at a cognitive bias. What bias do you think is responsible?

          If anything, the opposite argument looks more compelling to me: We should expect our biases to make us underestimate the likelihood of spacefaring civilizations, because we can’t think of most non-Earth-like scenarios that would result in such civilizations, hence we neglect how disjunctive the event is.

  27. Valhar2000 says:

    I think the more likely explanation is that interstellar travel is not possible, or, if it is, it requires such an investment that civilizations that do it severely curtail their ability to survive. Thus, any civilization that attempts this one too many times sees its economy collapse, and its culture and population follow soon thereafter.

    • von Kalifornen says:

      That seems doubtful itself.

      Let’s look at three ways

      The Cheap way: Needs 1970s-2010s technology for starflight and as-yet-uninvented biotech, or just a really big freaking ship. Amazingly not-that-bad.

      The Awesome way: Also needs 1970s tech, but on a huge scale (though not that huge). Look at the spending on the Cold War nuclear arsenal: we built that, never used it, and came out just fine.

      The Smart Way: Singularity and tiny self-replicating nanostuff. This one might UFAI you.

      Also, most collapses of this type are eminently recoverable. We are close enough to starflight to see the way — there is nothing super-dangerous about it.

    • F. says:

      It doesn’t seem to be so difficult.

      http://en.wikipedia.org/wiki/Nuclear_pulse_propulsion

      “Project Daedalus was a study conducted between 1973 and 1978 by the British Interplanetary Society (BIS) to design a plausible interstellar unmanned spacecraft that could reach a nearby star within one human scientist’s working lifetime or about 50 years.”

    • @JohnWBH says:

      I’ve also heard it theorised that as the speed of light is an unassailable limit, and we assume high tech civilisations will need a lot of communication, it makes sense to keep our civilisation as dense as possible in order to reduce the latency caused by lightspeed. So instead of expanding, hypertech civilisations might stay in a relatively small area of space in a dense computronium block.

      • Paul Torek says:

        Suppose you live in a dense computronium block, but you’ve invented a new religion, or you’ve mutated, or for some other reason don’t like the block-world. Or you like the block-world just fine but feel an urge to create a whole ‘nother block-world somewhere else. What’s to stop you? You’ve got sufficient technology and wealth.

        People like you could be extremely rare in the block-world of origin, but you’d soon become typical among the spreading block-worlds.

  28. Felz says:

    My favorite explanation is still transcendence. Specifically, imagine that there’s something that’s always more resource-rewarding than bothering with reality as we know it; maybe aliens figure out how to manufacture pocket dimensions. Then it might always make more sense to create pocket dimensions than to bother sending a colony seed to another star. Any intelligent civilization would probably take that option, and we wouldn’t see them.
    I personally imagine aliens led by an AI might optimize themselves out of detectable existence rather than into pocket dimensions, though. If the civilization accepts simulations to have utility, then it follows that any representation could have utility; a rock could in a way be said to contain as much computation as conventional computronium. The AI might progressively offload its computation into rocks with some interfaces to the computronium. We have much the same situation as with pocket dimensions: it’s easier to just create more ‘imaginary’ worlds than actually conquer physical ones. Eventually the computronium left over is ditched as being too limiting to interact with, and the civilization effectively erases itself.

    • F. says:

      The problem with transcendence is that such a civilization would still need to expand in order to avoid being destroyed by an alien, expansionist civilization.

      Besides, as Scott wrote, it only takes one non-transcendent civilization to settle the sky.

      • jaimeastorga2000 says:

        Besides, as Scott wrote, it only takes one non-transcendent civilization to settle the sky.

        Or one non-transcendent faction in a civilization. With sufficiently advanced technology or quirky alien reproductive biologies, it might just take a single individual.

      • Alejandro says:

        What if “transcendence”, whatever it is, reveals something not only much more enticing than ordinary physical reality, but also much more powerful than anything in it could be? Powerful enough that if you choose “transcendence”, you become invulnerable to puny invasions through ordinary physical space from ordinary physical civilizations?

    • MugaSofer says:

      “If the civilization accepts simulations to have utility, then it follows that any representation could have utility; a rock could in a way be said to contain as much computation as conventional computronium. The AI might progressively offload its computation into rocks with some interfaces to the computronium.”

      That … doesn’t work. The rock-computronium “interface” would have to, itself, simulate whatever the rock was supposedly simulating in order to “decode” the rock’s output (that is, noise).

  29. Papermoon says:

    If we assume that a billion-year-old civilisation is just an ego enforcing its will in a rapidly expanding bubble across the universe, then it seems odd to just assume we’d notice. If it’s a paperclip machine then yeah, but a billion-year-old civilisation….

    Just look at the surface of the earth. The canyons of city blocks, artificial islands, reservoirs and trash heap mountains. Fundamental forces like geology have been overwhelmed by human will. Can a rat in the city tell that the city is the will of some advanced civilisation? What about a fish in a reservoir?

    So for all we know, the value of pi and the particular groups of 3 that quarks come in are all the result of us being in someone’s bubble.

    Don’t forget that the laws of physics can be hand waved away just with the idea of “simulation”. Fermi’s paradox is just far too open ended.

  30. @JohnWBH says:

    I’m not sure how to cash this out mathematically, but I’ve always felt that the great filter might just be an example of anthropic bias. If we assume that any civilisation that had built structures visible at x lightyears could have reached our system in that time, and we assume that their interaction would have prevented humans evolving (or reaching civilisation or whatever), then it would be a necessary condition of the existence of humans that the aliens are not near enough to have interacted with us. Does that make sense?

    • Steve says:

      Anthropic reasoning is not an excuse. Anthropically speaking, the Earth has a gravitational field, because if it didn’t we wouldn’t have an atmosphere to breathe, and we wouldn’t be here. But there’s still a precise mathematical model for the attraction between masses at various distances.

      For outcomes you’re very confident maximize entropy–say, a photomultiplier in front of a 50% effective mirror recording “1001011111001”–you can just say that, if that weren’t the outcome, there wouldn’t be an observer who recalled seeing that outcome. But for something that may have a simpler explanation, like why the universe is 14 billion years old and we only observe a single civilization, we want something beyond anthropics.

  31. peterdjones says:

    The rejection of transcendence, and other options based on convergence, is a bit glib. If all sufficiently advanced species converge on some non-expansive goal… then they all do, and it’s no refutation to say they will have some other goal. If the attractor conflicts with their existing religion, or whatever, then their existing religion will be seen as wrong. That they might not converge is not a specific criticism, but a way of saying that the convergence claim isn’t necessarily true, like most theories.

    • Stephen says:

      I think the implicit argument there was that transcendence attractors covering all of evolved mind-space seem pretty unlikely. That would be my answer, anyway.

      • peterdjones says:

        It seems so to Less Wrongians, sure enough, and “seems” is the word. There’s a widespread assumption that all these weird minds would converge on physical and mathematical truths, and that they nonetheless wouldn’t converge on all that fuzzy morals-and-values stuff. Not backed by any relevant argument.

      • peterdjones says:

        “On Wei_Dai’s complexity of values post, Toby Ord writes: There are a lot of posts here that presuppose some combination of moral anti-realism and value complexity. These views go together well: if value is not fundamental, but dependent on characteristics of humans, then it can derive complexity from this and not suffer due to Occam’s Razor. There are another pair of views that go together well: moral realism and value simplicity. Many posts here strongly dismiss these views, effectively allocating near-zero probability to them. I want to point out that this is a case of non-experts being very much at odds with expert opinion and being clearly overconfident. In the Phil Papers survey for example, 56.3% of philosophers lean towards or believe realism, while only 27.7% lean towards or accept anti-realism.”

    • Scott Alexander says:

      Even if there is a morally correct goal, I expect that there can be some civilizations that can discover space travel without discovering the goal. Surely some aliens just won’t be very philosophical, or will all be psychopaths, or something. Or there will be factions among them that are.

  32. AndR says:

    I don’t see how this is supposed to make the great filter less scary, other than perhaps suggesting that it’s not an immediate risk.

    If garden variety X-risk is so much easier to survive than the great filter, then anyone even slightly concerned about X-risk should be really scared of the great filter.

  33. J. Quinton says:

    The combined Unfriendly AI/alien exterminators are obviously waiting for us to find the Mass Relays before they consider us a threat.

  34. Scott says:

    My favourite: the Great Filter is a line of code in the simulation we are living in. Simulating brains is computationally expensive compared to simulating planets and stars, so the simulators saw no reason to needlessly tax their computational resources by simulating millions of civilisations. This can also explain why there is just one civilisation – the simulators may not have the resources to handle multi-civilisation simulations, or they might find the results of multi-civilisation simulations banal (if the simulators’ universe doesn’t have the equivalent of our Fermi paradox – if there *were* near-lightspeed waves of colonists expanding outwards and intelligent species obliterating each other because game theory, and by dint of being early the simulators were able to take control of the universe, they might not be very interested in playing that out again).

  35. moridinamael says:

    It’s fossil fuels.

    I already talked about this on less wrong before.

    In order for large quantities of fossil fuels to form, you need to have
    (1) carbon-based life
    (2) that has existed for a long time
    (3) in vast quantities
    (4) that has been successfully buried via sedimentation
    (5) and not oxidized prior to complete burial.
    You also need an
    (6) oxidizing atmosphere
    (7) that isn’t so oxidizing that it immediately oxidizes everything
    (8) but is oxidizing enough that the fossil fuel will burn sufficiently after you dig it up.
    Then there needs to be
    (9) enough fossil fuel in the ground
    (10) in easily accessible, economical deposits
    to permit the dominant species to actually use it to fuel that species’ technological growth from pre-industry to developing a totally fossil energy-independent infrastructure before running out. (Something we haven’t actually succeeded at and may actually totally fail at, and this may be what gets us.) Each of these 10 bullet points (not an exhaustive list, just the first ten major bottlenecks I could think of as an expert riffing before my coffee) may be rare in a planetary sense. Multiply ten rare things together and you get a small number. Add that into the Drake equation or whatever paradigm you wish, and there’s your Great Filter.
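
    For what it’s worth, a rough sketch of the “multiply ten rare things together” point; the per-condition probabilities below are made-up placeholders rather than estimates, and the only point is that even mildly uncommon conditions compound quickly:

    ```python
    from functools import reduce
    from operator import mul

    # Hypothetical chance that an otherwise-suitable planet satisfies each of the
    # ten bottlenecks listed above (placeholder values, not estimates).
    p_conditions = [0.5, 0.3, 0.3, 0.2, 0.2, 0.5, 0.5, 0.5, 0.1, 0.1]

    p_all = reduce(mul, p_conditions, 1.0)
    print(f"joint probability: {p_all:.2e}")  # 2.25e-06 even with these fairly mild numbers
    ```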

    Somebody in the course of the previous discussion seemed to think that if there were, say, one tenth as much fossil fuels lying around, we would just be more efficient. Or if there were *none*, we would just develop a biofuels-based infrastructure. No. If we didn’t have hydrocarbons in the enormous, excessive, gluttonous quantities in which we find them, the world population would still be around half a billion people living off wood-fired stoves. Nobody would just have the magical intuition to know that we should invent a biofuels infrastructure. Not to mention that it makes about as much sense as the machines using the humans’ body heat to generate power in The Matrix.

    • F. says:

      “Nobody would just have the magical intuition to know that we should invent a biofuels infrastructure.”

      I wouldn’t be so sure. Technological improvement has been exponential since humanity began. Without fossil fuels, that particular stage of development would have been slower, but there’s an immense amount of time for an intelligent species to develop unforeseeable technologies.

      • moridinamael says:

        Undeniably true, but that actually agrees with my point.

        Let’s say I’m a bronze age farmer-genius who somehow figures out how to cook ethanol from my maize crop, and, improbably, there already happens to be a combustion-type engine of some kind in existence invented by some alchemist. Without a fossil fuel infrastructure already in place to run my farm, I’m limited to hauling corn around on oxen, etc., and I still have to feed my family and tithe my lord, so there’s a very tiny amount of corn that’s going to be used as feedstock for my ethanol bioreactor. You can only create exponential growth if you plug the outputs of the system into the inputs, i.e. if the fuel you’re producing is being used by the farming process, and you sell only what you can afford to sell, which will be almost nothing at first. And considering the energy that goes into producing it, consider the price you’d have to sell it at. What’s your market? Nobody has cars. There are no roads.

        So maybe my grandson will actually be able to ride the ethanol-powered tractor around the farm harvesting corn in two generations. Maybe that’s optimistic, maybe pessimistic, I don’t know, I haven’t done the math. My intuition says in this scenario it will be several generations before biofuels are making any kind of impact on the global stage. Edit: And then of course, every third generation there’s a war that wipes out the local infrastructure and resets the clock. So, I frankly don’t think this would ever take off. Technologies increase exponentially but infrastructures don’t until they reach a critical mass.

        Compare this to what happened on Earth, where we drilled a hole in the ground and billions of barrels of incredibly dense fuel literally gushed out.

        • F. says:

          I can imagine the corn powered tractor being valued by a lord as a funny curiosity and kept safe. I also think it’s an exaggeration to say that every third generation a war wipes out the infrastructure.

          In a fossil-fuel-free world I can imagine electrical generators being invented and combined with pre-existing hydraulic technology to create a hydroelectric power plant. Then the population density around Niagara Falls skyrockets as electrically powered factories are built. Maybe there can be something similar for geothermal electricity. Maybe the electricity isn’t even initially needed; maybe old-fashioned hydraulics are enough to power an industrial revolution once the social conditions and the agricultural science (see Wikipedia articles titled “history of fertilizer” and “british agricultural revolution”) are ripe for it. Electricity could be a later development. I grant you that I can’t see how mechanization of agriculture would happen at this stage. But I imagine that hydroelectric industrial cities would make it possible to have a steady technological and scientific development until biofuels. Or nuclear power, and then easy recharges for electric tractors and cars thanks to a network of nuclear power plants.

        • F. says:

          In general, what is really required for an industrial revolution is not motorization, but factories. Without stuff like coal-powered trains, goods would have continued to travel on rafts and sailing ships like they did for millennia, and population would continue to be concentrated along rivers and coastlines, like it did for millennia. This does not preclude industrialization. What really matters is factories, and in the initial stage of industrialization what factories require is surplus manpower, and this would have happened in a setting comparable to 18th century England, coal or not, thanks to the advances in agriculture such as chemical fertilizer, which free up manpower. All the remaining development follows from the rise of factories. The great inventions of the 19th century, explosives, electricity, the telegraph, advanced steelmaking, many of which were simply the logical continuation of previous developments, would have happened with or without coal thanks to the boost to development that comes from factories. Afterwards, like I said above, you get hydroelectric power and hydro-electrified factories (conveniently placed on waterways that enable motor-free transportation of goods, even more so with explosives-dug canals), and you continue from there until biofuels and nuclear power.

        • Inner Partisan says:

          What you fail to see in focusing on ethanol when talking about biofuel is that there is indeed another kind of biofuel, one that has been produced since (at least) the Bronze Age and used to power cars as recently as the 1950s: charcoal.
          Yes, in a scenario without readily available, abundant fossil fuels, a civilization would most probably take a couple of millennia more to attain/go through an industrial revolution – but it’s in no way impossible (or even improbable).

        • moridinamael says:

          I don’t know. WolframAlpha tells me that the world consumed about 5×10^20 Joules of energy this year. That’s a lot of goddamn charcoal. At 30 MJ/kg energy density, that would be roughly 1.6×10^13 kg of charcoal to match our current energy consumption patterns. WolframAlpha helpfully informs me that this weighs as much as 1/5 of the total biomass on the earth.
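
          (A back-of-the-envelope check of those figures, taking the 5×10^20 J and 30 MJ/kg numbers above as given:)

          ```python
          world_energy_use_joules = 5e20    # annual figure quoted above
          charcoal_energy_j_per_kg = 30e6   # ~30 MJ/kg energy density, quoted above

          charcoal_needed_kg = world_energy_use_joules / charcoal_energy_j_per_kg
          print(f"{charcoal_needed_kg:.1e} kg of charcoal per year")  # ~1.7e13 kg
          ```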

          I realize I am being absurd here, my point is that fossil fuels are an extremely, absurdly, ridiculously abundant source of portable, high-deliverability energy and literally any other fuel source you can name is not going to come within six orders of magnitude of being a competitor in terms of efficiency.

      • John SF says:

        “Technological improvement has been exponential since humanity began”
        Sorry, no.
        Over c. 250,000 years of Homo sapiens, what you see is a l-o-o-o-n-g period of relative lack of change, then an exponential uptick at the end.
        Arguably even post-neolithic, most cultures in most places were more static than changing until you get to c.1500 in Europe.

        • Anonymous says:

          Most people can’t tell the difference between an exponential and a hockey stick.

    • Jake says:

      This is far too Earth-centric and ‘our own contingent history’-centric. If there were not so many fossil fuels on Earth, our history would certainly have been different, but it’s fairly absurd to say that it would rule out there being a scientific revolution. It’s not unimaginable that a lack of fossil fuels could have delayed our civilization’s development, or even halted it altogether, but it’s not nearly the kind of insurmountable difficulty Scott describes – on the level of destroying 999,999,999 out of a billion species.

      Imagine that there were no fossil fuels on Earth for whatever reason, and that civilization developed on basically similar lines up to the point where, in reality, we started using fossil fuels. Do you really think that 999,999,999 out of a billion possible outcomes for an Earth lacking fossil fuels lead to the extinction of the species before it can reach or exceed our modern level of technology – however long in the future it takes?

      • moridinamael says:

        … yes? At least, technology isn’t just a thing that happens independent of civilizational infrastructure. It’s a hard slog that takes a long time.

        Pick an artifact. Work backwards. Let’s pick a Dell laptop. How do you build a Dell laptop at a reasonable price without the enormous economies of scale and the huge research dollars that go into the dozens of laboratories and universities across the globe? How are all these universities, fabrication plants, and factories powered? Fossil fuels, obviously.

        But it’s worse than that. The Dell laptop is the tip of an iceberg that’s been growing since the start of the industrial revolution. The iceberg consists of both knowledge and physical things like universities, power grids, internet cables. That iceberg has only been able to grow as large as it has due to a steady diet of fossil fuels.

        I am having a very, very difficult time conceiving of an alternative narrative where a society of gentleman-scientist aliens with no civilizational infrastructure (and no fossil fuels) but lots of intellect and curiosity have still managed to build from scratch a Dell laptop for $1000 in their own dollar-equivalent.

        So when you imagine us reaching or exceeding our modern level of technology without fossil fuels, how do you see that happening exactly? How are our universities being built? How are people attending universities, in numbers greater than they were ca. 1600? How are population centers being fed, such that people can form specialized organizations and build technology? How are cities being supported?

        Put another way, if you handed Isaac Newton the blueprints for this Dell laptop along with hundreds of textbooks explicating its underlying principles, maybe he could build it if he marshalled the resources of all of England. The laptop would cost roughly a billion dollars. And there would be little point to turning it on, because there would be no Internet.

        I can admit that on some alien world, things might just be *utterly* different. Like, maybe the aliens are photosynthetic and don’t need agriculture, which undercuts like 90% of my argument. But there are lots of counters to that, like, we know humans are only as intelligent as we are because we’re omnivores.

        Overall, I admit, 1 out of 999,999,999 odds maybe don’t cover it for fossil fuels. However, I am with Histocrat below. Lack of fossil fuels could just be the first Good Filter. Then there’s another Good Filter later. You only need two Good Filters to compose one Great Filter.

        • F. says:

          Initially, transportation of goods, including food, would primarily happen on sailing boats and rafts, like it did for centuries before trains. Cities would be located according to those constraints, around coastlines, rivers, and canals. Like they were for centuries.
          Big cities located on coasts and rivers are easy to feed once you have agricultural surplus. You just feed them the same way the million (!) inhabitants of ancient Rome were fed. Boats.

          Keep in mind that most activity would happen in very densely populated areas, comparable to England or Italy or Japan. Do not think of the immensity of the US.

          When biofuel is developed, you get biofuel trains. You only need to sail boats up to that point. Also, electric trains, powered by hydroelectric plants or biofuel plants or nuclear reactors.

        • Anonymous says:

          I’m not really saying you can’t do biofuels, full stop. I’m saying you can’t go from pre-industry to CERN supercollider or Saturn V rocket for that matter without the wiggle room afforded by a true overabundance of energy, in one form or another.

          – moridinamael

        • F. says:

          Nuclear power then.

        • Nestor says:

          I have never seen Isaac Newton so thoroughly underestimated.

          He would absorb the information. He would build a prototype with his own resources.

          It would not look remotely like a Dell.

          It would not have an internet connection.

          It would still be useful despite this.

          He would use it to do things you can’t even imagine.

        • F. says:

          To moridinamael: I really, really would like to stress that I think you underestimate the potential of boats.

          Even mere sailing boats are capable of doing all the ferrying required for the kind of progress that eventually develops nuclear power.

          Also while biofuel trucks may not be practical, biofuel ships certainly are.

        • moridinamael says:

          I admit I know very little about the logistics and efficiencies of boats. The first limitation that springs to mind is that you are restricted to doing all your industry around bodies of water; if you want to move things inland, you have to build an artificial channel, which is always going to be far more costly in terms of time and energy than a road would have been.

        • F. says:

          Yes, you’re limited to bodies of water. Human activity takes place where the opportunity is. Without fossil fuel, we would have an industrialization focused around coastlines and rivers. Think of the London – Netherlands – Rhine axis for starters. In fact, you only have to continue to use the naval and riverine lines of communication that were the skeleton of European trade for centuries. Major European cities are already located on rivers.

          When you develop explosives, which are the logical continuation of a trend of improving chemistry which predates the industrial revolution, it becomes easier to dig canals that connect different rivers.

          I’ll also repeat myself (sorry but with such a crowded discussion tree, it might be useful) and point out that boats were capable of feeding the million inhabitants of ancient Rome.

          I believe that without fossil fuel, it would be possible to reach a civilization comparable to that of the late 19th century (factories, telegraph, even radio, scientific research), using simply sailing ships and possibly charcoal powered ships.

          To reach 20th century level, you need machines such as tractors for agriculture. But such vehicles don’t need to be combustion-powered; they can be electric, and supported by a grid of power plants.

          It may sound difficult to develop such an electric grid without fossil fuel. But it only takes time. I’ll leave aside the solar option, even though I think it is a possibility, and just say that a civilization that has already reached late-19th-century levels of sophistication is likely, given enough time, to eventually develop nuclear power. Then you can build nuclear reactors which power electric vehicles and machinery everywhere.

  36. Robin Hanson says:

    The difficulty of thinking of plausible future filters, relative to the ease of thinking of plausible past filters, is indeed some evidence that the filter is behind us. But it is hard to see that we should feel very confident in these difficulty estimates. So we should still put a substantial weight on a future filter. Also, there is a persuasive anthropic argument for future filters: http://www.overcomingbias.com/2010/03/very-bad-news.html
    Oh, and the great filter paper was from ’98, not ’08.

    • Vadim Kosoy says:

      I think the anthropic argument is more than cancelled out by the fact that in early filter universes the impact of your decisions is *much* greater (because it propagates into the future supercivilization). So you should make decisions as if you live in an early filter universe (at least decisions that have long term impact; however other decisions don’t seem to depend on the location of the filter anyway). You might still insist to ask whether we *actually* live in an early filter universe. To this I would say that epistemic questions (as opposed to decision theoretic questions) are meaningless. They are only (approximately) meaningful in contexts in which anthropic considerations are irrelevant, which is not the case here.

      • peterdjones says:

          Are epistemic questions still meaningless if my UF places a really high value on the really true truth?

        • Vadim Kosoy says:

          This doesn’t give an unambiguous answer, since you need to specify who exactly you’d like to know the truth and how you accumulate the truth values over these entities.

        • peterdjones says:

          “Me” and “whatever works”.

          I might need some decision theory to implement that… but I might need some epistemology to figure out the decision theory… I still don’t get the Boo Epistemology!, Hurrah DT!

    • Scott Alexander says:

      Sorry. I knew that and I’m not sure how that mistake slipped through.

      Katja’s anthropic argument seems to rely on there being logically possible worlds with Great Filters in different locations. How would it deal with the following critique? Suppose it is logically possible that there is no Great Filter. Then that possible world will develop a supercivilization of quadrillions of beings. But if there are supercivilizations of quadrillions of beings somewhere, it is almost inconceivably unlikely that we would end up in a sparsely populated universe like this one. Therefore, there’s something wrong with the idea of different logically possible filter locations plus self-indication.

      • James Miller says:

        Under eternal inflation new universes keep being created at a super exponential rate. So if a universe has 1 billion people at time T=1 and 10^10 billion people at time T=2 you are probably in a universe of just one billion people because there are so many more of these.

  37. Jake says:

    Minor correction: looks like the original Hanson paper is from 1998, not 2008.

  38. Handle says:

    Another possibility is that there is simply an over-optimistic error in our analysis of how feasible interstellar colonization really is, and perhaps there is another over-optimistic error in the estimate of how feasible it is to resolve signals from remote but positionally-constrained civilizations.

    There is a lot of talk in this and other similar posts that simply takes it as a matter of fact that near-light-speed travel is achievable, but fails to account for the fact that we haven’t produced any craft with a solar-system escape velocity greater than what Voyager 1 managed decades ago. The rocket equation is a physical law, not some minor and temporary technical obstacle which can be overcome with a little more research and tinkering. You can’t just Science-Fiction-Assumption hand-wave away what might be an impossibly hard constraint.

    I almost never see calculations of the immense energy requirements (which climb steeply as you approach c) for, say, accelerating a 10 ton object to 0.95c, or of the crazy mass ratios of a rocket system that could accomplish such an acceleration.

    Voyager flies at 3AU a year. c is 60K AU/year. If speeds doubled every decade, it would still take 150 years to get near c. But speeds are completely stagnant.
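
    For concreteness, here is a rough sketch of both calculations: the relativistic kinetic energy of the 10-ton, 0.95c example above, and the number of speed doublings needed to go from Voyager-class speeds to near c. The formulas are standard; the payload mass and speeds are just the figures from this comment, so treat the outputs as order-of-magnitude only.

    ```python
    import math

    c = 3.0e8     # speed of light, m/s
    mass = 1.0e4  # 10-ton payload, kg (figure from the comment above)
    v = 0.95 * c

    # Relativistic kinetic energy: KE = (gamma - 1) * m * c^2
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    kinetic_energy = (gamma - 1.0) * mass * c ** 2
    print(f"energy to reach 0.95c: {kinetic_energy:.1e} J")  # ~2e21 J, a few years of world energy use
    # (and that ignores the rocket-equation penalty of carrying your own reaction mass)

    # Doublings needed to go from ~3 AU/yr to ~63,240 AU/yr (the speed of light)
    doublings = math.log2(63240 / 3)
    print(f"doublings needed: {doublings:.1f}")  # ~14.4, i.e. ~150 years at one doubling per decade
    ```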

    So any theory of the Great Filter should start by explaining why speeds have been so stagnant in this age of rapid technological progress and deepening experience with space exploration and why we ought to expect this challenge to be consistently overcome some day. I don’t come across any reasoning on that score, just wishful, magical thinking and optimistic assumptions and a faith in the inevitability of ever-increasing progress in capabilities.

    We don’t need to look out into space, the limits are probably right in front of our face, here at home, if only we’ll accept them, bleak as they may be.

    • Alrenous says:

      perhaps there is another over-optimistic error in the estimate of how feasible it is to resolve signals from remote but positionally-constrained civilizations.

      About that.

      Hypothetically, assume that the Arecibo telescope was put on the back of a starship making its way into deep space. If it was possible to point the telescope back towards Earth, how far could the starship travel and still be able to detect terrestrial electromagnetic radiation leaking into space? If we were planning on catching up on our favorite soap, cheering for the home footie team, or grooving to the top ten, the answer is not very far.

      Neglecting atmospheric effects, Table 1 and Figure 3 show the following:

      An AM radio broadcast could only be detected out to 0.0074 Astronomical Units (AU).
      FM Radio could be detected out to 5.4 AU.
      A 5 Megawatt UHF television picture could be detected out to 2.5 AU, although the carrier wave could be detected much further; out to 0.3 light years.
      The Pioneer 11 can only be detected to 120 AU.

      To put this into perspective, consider the following:

      1 light year is equivalent to 63,240 AU.
      Jupiter is 5.2 AU from the Sun.
      Alpha Centauri is 4.3 light years away.
      Vega is 26.3 light years from Earth.
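
      To put rough numbers on why leakage is so hard to hear: under a naive free-space inverse-square assumption (ignoring antenna gain, bandwidth, and noise details), detection range scales with the square root of transmitter power, so pushing the quoted ~5.4 AU FM figure out to even the nearest star takes an enormous power increase. A minimal sketch under that assumption:

      ```python
      AU_PER_LIGHT_YEAR = 63240

      fm_detection_range_au = 5.4                  # quoted figure above
      alpha_centauri_au = 4.3 * AU_PER_LIGHT_YEAR  # ~4.3 light years, quoted above

      # Range ~ sqrt(power)  =>  power ~ range^2 (crude free-space approximation)
      power_factor = (alpha_centauri_au / fm_detection_range_au) ** 2
      print(f"~{power_factor:.1e}x more transmitter power needed")  # ~2.5e9x
      ```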

      • Handle says:

        Thanks for the link, that is exactly what I’m getting at. An obvious alternative answer to the question, “Why can’t we hear anybody” is, “Um the laws of physics tell us we just can’t hear that well,” instead of just, “That must mean there’s nobody out there!” I’ve never been satisfied with people hand-waving this away.

        The local galactic area (say, 1000 ly) could be teeming with similar civilizations for all we know, that doesn’t mean we have the capacity to detect their signals, or they ours.

        Furthermore, think about how signals are sent these days. No sender of the strongest signals is sending simply coded analog signals over a single frequency. There is all kinds of digital compression and encoding, DRM and other encryption, and also frequency hopping, combinations, multiplexing and so forth.

        Try extracting a signal from your local EM spectrum from scratch. Good luck deciphering it; it’ll look like complete noise.

        • Jake says:

          The point is that a billion year old species shouldn’t just be leaking out the same sorts of radio waves we are. For one thing if they are colonizers they should be HERE by now, not a few star systems over. For another, they should be dramatically reshaping the structures of their star systems to make maximum use of their resources, which is something we would most likely be able to detect unless it was being purposefully hidden for some reason.

    • F. says:

      Using nuclear pulse propulsion, we could build a craft that reaches the nearest star in only a century. The technology is possible today. We just don’t build such a ship, because it would be overkill to use it to get to Mars, and we have no burning need to get anywhere else.

      There’s also the possibility of laser propulsion.

      http://en.wikipedia.org/wiki/Nuclear_pulse_propulsion
      http://en.wikipedia.org/wiki/Laser_propulsion

      There’s also miniaturization. If the load you need to carry is very small, the vehicle can be small also. You can “seed” a distant star with ridiculously tiny robots which then build larger ones from local materials.

      • Vadim Kosoy says:

        Nitpick: I think it would be currently difficult to work on a nuclear pulse drive since international treaties forbid placing nuclear explosives in space. See also http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion)

      • Handle says:

        The US has been working on nuclear-based propulsion ideas for 60 years and without a single test as even a proof of concept of even thermal designs (which are not banned under treaty – many long term probes run on heat from Plutonium-238), and the Russians say they are working on a version for Mars, but who knows.

        The point is that chemical rockets are a non-starter, and then the question is why should we expect whatever considerations are keeping these non-chemical technologies mothballed and grounded to terminate sometime in the future?

        • F. says:

          The future is long.
          It doesn’t seem realistic to assume that interstellar travel is impossible or almost impossible for advanced civilizations in general, simply because here on Earth in the 21st century we happen not to have the kind of burning immediate thirst for fast travel that would encourage us to iron out the difficulties in nuclear pulse ships.
          From our present-day knowledge it seems that nuclear-pulse-powered, or laser-powered, interstellar travel is feasible. If you argue that there is a fundamental obstacle to interstellar travel, you must provide some evidence for this. That our present-day rockets can’t get to the stars isn’t a good argument, since our present-day rockets are designed to get to the planets, not to the stars, and since we can already conceptualize faster ships; we just don’t build them because, fundamentally, we happen not to really need or want them right now.

    • Anthony says:

      If it takes 100,000 years to develop technology that would allow us to go a significant fraction of the speed of light, we’re still talking a drop in the bucket as far as the Fermi Paradox. That’s less time than it took for proto-primates to evolve sufficient intelligence to do the job.

      Piano estimates (comment at 12:45am) that there could be 700 near-light-speed-expanding civilizations who are unaware of each other, so it is possible that we’re just lucky enough, and we still have a chance to become that expanding civilization, because we’re far enough away from the Great Paperclip Void in Boötes.

      If there is no way to efficiently cause direct conversion of matter to energy, the civilizational speed limit for long travel may be dependent on harvesting your own fuel for fusion (Bussard ramjets); that limit seems to be 0.12c, which makes everything 8 times as far apart. Intergalactic travel may be more limited than interstellar travel because the space is *so* much emptier, so that the real constraint on a civilization surviving is that no unfriendly civilization arises in the same galaxy.

      • Handle says:

        Statements like, “… it takes 100,000 years to develop the technology …” are part of the problem I’m describing, because they assume that there is some possible technology which can overcome some fundamental obstacle, and it’s just a matter of time before someone figures it out. There is some kind of reasoning fallacy at work here. Every technology that has ever been developed and every problem that has ever been solved has evolved this way, and everything new that we will have in the future will also come about this way, so we start to expect that every currently nonexistent capability is just somewhere in that future development space.

        My point is that interstellar travel could be like a perpetual motion machine or practical alchemy. These are also things that our predecessors thought they could accomplish with the right amount of magic, technology, and cleverness, but they were all trying to do the infeasible, and eventually everyone except cranks rightly gave up trying.

        • Scott Alexander says:

          Interstellar travel is something we could do right now with current technology. It would just take 40,000-odd years. Invent some kind of machine that thaws out and grows frozen human embryos and the problem is solved.

          (or be an immortal civilization, or one that hibernates easily, and again, problem solved)

          (or just send von Neumann probes and don’t worry about life forms, and again, problem solved)

          “Traveling long distances in a medium where zero energy is required to maintain a consistent speed” doesn’t seem like the sort of thing there can be a physical barrier to.
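
          As a rough sanity check on that figure (a minimal sketch assuming a probe simply coasts at a constant speed; the exact number depends on what launch speed you grant “current technology”):

          ```python
          AU_PER_LIGHT_YEAR = 63240
          alpha_centauri_au = 4.3 * AU_PER_LIGHT_YEAR

          # Coasting travel times at a few probe speeds discussed in this thread
          for speed_au_per_year in (3, 6, 12):
              years = alpha_centauri_au / speed_au_per_year
              print(f"{speed_au_per_year:>2} AU/yr -> {years:,.0f} years")
          # ~90,000 years at 3 AU/yr, ~45,000 at 6 AU/yr, ~23,000 at 12 AU/yr
          ```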

        • Handle says:

          @Scott:

          “Interstellar travel is something we could do right now with current technology. It would just take 40,000-odd years.”

          I call BS. You are assuming too many things you should try to prove first before jumping to this conclusion. Yes, we can probably accelerate something like a time capsule to 6 AU/year (still twice the escape velocity of Voyager I) and point it in the direction of the nearest star with a planet in the habitable zone and wait. Yes, near absolute zero everything should last forever, but it’s not obvious we can design a “keep a lot of stuff near absolute zero for 50K years” integrated system, even in space.

          Even designing a 50Kyear time capsule is tougher than it sounds. But you want more than a time capsule, you want entire functioning systems. How do you know we can make any machine that can actually ‘hibernate’ for 50K years? We don’t have batteries or solar cells that will last that long.

          The ways proposed to get ‘around’ this problem are to conceive of a kind of self-remaking factory ship, or perhaps a chain of deeply frozen systems, where each one operates until it wears out or accumulates defects and then thaws the next one to replace itself. But now you are probably talking about a very large object.

          It is easy to say, “Some civilization will surely bear that enormous cost sometimes,” but how do we know that? What’s the largest amount of GDP that any human civilization ever spent on any ‘cool-factor’ project aside from war?

          Furthermore, let’s say that we figured all this out and started sending little time-capsule-machine colonization probes to the nearest planets at a rate of 1 light year every 10,000 years or so. There is still the problem of picking up signals across such vast distances, and it’s not obvious we can do this even from Earth to wherever the nearest candidate planet is, where the dispersion and aiming problems are 10 million times worse than for Voyager and Pioneer today. If other civilizations in the universe are doing it that way, we still wouldn’t be able to detect it.

          My contention is simply stated: The minimally effective method of interstellar travel may still be so expensive, difficult, and complicated to achieve and require such vast stretches of time that it’s reasonable to think that no civilization would be willing to decide that it’s worth the economic expenditure to even try, and that even if they did try, we couldn’t detect it.

          After all, we are an advanced space-faring civilization that has had the technological and economic ability to pursue this objective for a while, and we have made the conscious decision to give it a tiny amount of our attention at a stagnation-level of progress for half a century. Why should we assume that situation will change?

        • Alrenous says:

          The likely usefulness of an idea drops exponentially with the number of ungrounded assumptions. If the assumptions stack, after a couple layers then the truth won’t even be on the list of possibilities.

          It can still be a useful exercise in philosophy and logic, but only if all assumptions are explicitly stated so they can be checked for contradictions. Exploring a contradictory idea is impossible, though sadly the brain won’t return an error if you try.

  39. Ken Arromdee says:

    I think that the biggest flaw in the argument is the “it only takes one” part. No, it doesn’t only take one. If the chance that a civilization expands without limit is 1 in a billion, any civilization that becomes such an expanding civilization almost certainly isn’t the first civilization in existence. All the other non-expanding civilizations will stop the expanding one.

    “It only takes one” only works if
    1) the “one” is the first one, or
    2) the other 999,999,999 in a billion fail because of something which destroys them as a force in the universe entirely (such as extinction), not because they still exist but don’t expand.

    Scott says “you are underestimating billion-year-old universe-spanning superintelligences. Don’t flatter yourself here. You cannot threaten them.” But it makes no sense to say that since you can’t threaten them they won’t destroy you–you can’t threaten them because trying to threaten them leads to them destroying you. That’s what “you can’t threaten them” means.

    • gattsuru says:

      The “Great Filter” argument presumes explosive growth up to a (very) high bound. Your inability to threaten another civilization isn’t just a result of the civilization’s self-defense, but simply because you can never get enough power to do something civilization-destroying to them.

      Let’s say that tomorrow, the Ancient Martian Civilization turns off their cloaking device, says hello, we’re about to take apart Sol for power to handle the first dozen Martian Colonization Projects. Don’t worry, we’ll leave half of the star behind for Earth and do some rearranging, so everything will be fine, but your orbit will shift and Jupiter will look funny and you’ll need to adjust your clocks again. They’re even /nice/ Martians, so they drop a big library of their space-faring technology and a bigger library of all the necessary bits to understand and use the stuff. And they leave some machinery behind, so we can even replicate the project /exactly/ as fast as the Martians did.

      By the time Earth has built its first set of colonization projects, consuming the rest of our star in the process, so have each of the twelve Martian colonies. So we’ve got our first dozen probes at the same time that they’re sending out a dozen probes from a dozen planets. And so on and so on, in exponential growth. We could decide to blow up a single Martian colony, sure — but they’ll simply and vastly outnumber anyone else that follows them.

      It’s similar to an issue in real-time strategy games (see Star Ruler for an exact example), where getting a small time or resource jump eventually creates an incredibly large difference in power.

      You /might/ be able to threaten individual members of a civilization that started before you (though that’s not a given: the gap may as likely be the difference between a fish and a human rather than a chimp and a human), but that doesn’t really get you anywhere or change the Great Filter calculus.

    • MugaSofer says:

      A non-expanding civ cannot stop a later expanding civ, because they have not expanded to reach them.

      You mean that the only-takes-one objection does not apply to hidden-but-expanding civs, which is true, and I don’t think Scott disagrees.

      But that does not undermine the only-takes-one argument in general, only in the specific case of civs that deliberately hide while expanding.

      • Anthony says:

        “A non-expanding civ cannot stop a later expanding civ, because they have not expanded to reach them.”

        Not quite. A (group of) non-expanding civ can prevent the expansion of an expanding civ by destroying its earliest colonization attempts. But I don’t think this can be *the* Great Filter, because it requires too much density to be effective.

    • Jake says:

      The point of “it only takes one” is that there only needs to be one civilization that successfully reaches the ‘colonize the universe’ stage for that civilization to be everywhere. So whatever factor or combination of factors is preventing such civilizations from developing must be very, very effective – on the order of preventing all but an unbelievably small portion of potentially life supporting planets from developing space-faring civilizations.

      You seem to be suggesting that the bottleneck is that only 1 in a billion civilizations choose to expand across the universe. This is one of the options of course, but from what I’ve seen of our civilization and of life in general it seems unlikely that only one out of a billion possible civilizations would manage to explore and colonize the universe beyond its homeworld.

  40. Shmi Nux says:

    I have trouble believing that you are one of those with poor enough imagination to apply anthropomorphic reasoning to independently evolved general optimizers. The only thing the lack of visible general intelligences tells us is that life/evolution has a large enough space of possibilities to not get close to the same narrow human-like subspace again.

  41. Histocrat says:

    Doesn’t this analysis stop working so well if you account for the possibility that the Great Filter is actually a disjunction of a few different Good Filters? e.g. any civilization that makes it this far tends to *either* change its own climate disruptively, ascend to a higher plane of existence, or dissolve into grey goo (of a sort that has trouble traveling through vacuum).

    The form of argument you seem to be trying to refute is

    I) It must be really improbable for a spacefaring civilization to develop.
    II) If X is a serious concern, that would make it more improbable.
    III) Therefore, we should raise our estimate of how serious a concern X is.

    But arguing that X can’t account for all of the improbability by itself doesn’t refute this argument.

    • Jake says:

      No one’s necessarily arguing that it has to be a single “Great Filter” that explains everything, but the larger point of figuring out where the bulk of the improbability lies is still important. It’s a big deal whether the two most-difficult-to-surmount obstacles are the origin of life and the creation of a technological civilization, or whether, on the other hand, plenty of technological civilizations make it past the various winnowings but some combination of future threats inevitably brings them down. The more we look at future threats, the more I feel we need to look at there being one big thing that takes species down, since otherwise the inevitable differences between possible species would presumably let some escape problems that bedeviled others, in a way they couldn’t if there were some universal constraint.

    • gattsuru says:

      In the Milky Way Galaxy alone, there are 2-6 x 10^11 stars. Our Local Group contains around 50 galaxies within ten million light-years. If you branch out to the hundred million light-year range, encompassing the full Virgo Supercluster, you’re talking 2-4 x 10^14 stars.

      Whatever the full sum of Filters are, they have to be very, very, very, very^14 effective. The really tricky question is whether they’re evolutionary or physics questions — things that predate us — or social or science questions — which we still have to deal with.

      • Histocrat says:

        “even a handful of one in one hundred million chances doesn’t get you there.”

        Doesn’t it? If we have two filters, each of which only one in a hundred million (1 in 10^8) civilizations passes, independently, then the probability of passing both is multiplicative: 1 in 10^16. Against your ~4 x 10^14 stars for the Virgo Supercluster, that means that if each star spawns <25 candidate civilizations, we'd expect to see <1 spacefaring civilization.
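
        A quick back-of-envelope check of that arithmetic; the star count is your Virgo Supercluster figure, and the per-star “attempts” number is purely hypothetical:

        ```python
        # Back-of-envelope check of the two-filter arithmetic above.
        # All inputs are assumptions taken from this thread, not measured values.
        stars = 4e14                # rough figure for the Virgo Supercluster
        attempts_per_star = 25      # hypothetical life-bearing "attempts" per star
        p_one_filter = 1e-8         # chance of passing a single filter
        p_both = p_one_filter ** 2  # two independent filters multiply: 1 in 10^16

        expected = stars * attempts_per_star * p_both
        print(expected)             # 1.0 -- fewer than 25 attempts per star gives < 1
        ```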

        • Scott Alexander says:

          Agreed, but unless you posit like fifty separate filters, they’re still going to have to all be pretty strong.

          If we need a factor of a billion and global warming is only a factor of ten, we can still say it’s probably not an interesting contributor.

          Also, I would expect ability to survive filters to be correlated; that is, a civilization that has its act together enough to survive global warming is also more likely to survive nuclear war, et cetera.

        • gattsuru says:

          We’re in a fairly well-established universe, and most proposed Great Filters don’t tie up resources for a large portion of that time period. You can get them all to add (or rather, multiply) up with about a dozen such chances, but most of the attempts I’ve seen to do so end up with the really unlikely survival chances in the past. Which may be the case, but isn’t terribly interesting or that different from the model of one Great Filter in the past.

          And, as Mr. Alexander points out, they’re probably not independent variables.

  42. nicole galpern says:

    could the great filter be a widespread acceptance of anti-natalism or negative utilitarianism? maybe once we’re close to colonizing the galaxy we inevitably reach a point in which we (or AI) think, “oh wait, this is all pointless.” but that would probably fall under “the great filter is not transcendence”.

    • Jake says:

      Yeah, like with the transcendence argument, the issue is that only a small part of a civilization needs to reject those anti-colonization values, and before long the colonizing part of the civilization will grow larger than its stagnant cousins. Maybe this could be put off for a time by repressive social policies that prevented any group from attempting colonization, but that would mean every civilization manages either to choose extinction or to perfect a regime of eternal repression, neither of which seems even all that likely, much less a universal inevitability.

  43. anon says:

    Perhaps the great filter is good old-fashioned human ingenuity, and we simply aren’t good at recognizing our own capabilities because they seem so basic to us. Maybe we’re blind to whatever particulars of intelligence or imagination are necessary for us to make it to the moon.

    This anthropocentrism contradicts the moral of practically every science fiction story ever. But maybe such places are the best starting point for figuring out science fiction’s mysteries.

  44. T. Greer says:

    I am surprised that no one has yet mentioned the basic energetic limits facing any exponentially expanding civilization. See Tom Murphy’s two posts:

    “Galactic Scale Energy”
    Tom Murphy, Do the Math (12 July 2011)

    and

    “Exponential Economist Meets Finite Physicist”
    Tom Murphy, Do the Math (10 April 2012).

    You can only bend physics so much…

    • Alrenous says:

      Teasers dude. Sell your links, don’t make them sell themselves.

      Alright, the Earth has only one mechanism for releasing heat to space, and that’s via (infrared) radiation. We understand the phenomenon perfectly well, and can predict the surface temperature of the planet as a function of how much energy the human race produces. The upshot is that at a 2.3% growth rate (conveniently chosen to represent a 10× increase every century), we would reach boiling temperature in about 400 years. [Pained expression from economist.] And this statement is independent of technology. Even if we don’t have a name for the energy source yet, as long as it obeys thermodynamics, we cook ourselves with perpetual energy increase.
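
      Unpacking that claim with a rough sketch, treating the Earth as a simple radiator and ignoring greenhouse details; the temperature, solar, and human-power figures below are assumed round numbers, not taken from Murphy’s post:

      ```python
      import math

      # When does exponentially growing waste heat push the surface to boiling?
      # Radiated power scales as T^4, so the total flux needed at 373 K is
      # (373/288)^4 times today's, and human waste heat must supply the difference.
      T_NOW, T_BOIL = 288.0, 373.0  # K: assumed current mean surface temp; boiling point
      SOLAR_ABSORBED = 1.2e17       # W of sunlight Earth currently absorbs (rough)
      HUMAN_POWER = 1.3e13          # W of current human energy use (rough)
      GROWTH = 0.023                # 2.3%/yr, i.e. roughly 10x per century

      extra_needed = SOLAR_ABSORBED * ((T_BOIL / T_NOW) ** 4 - 1)
      years = math.log(extra_needed / HUMAN_POWER) / math.log(1 + GROWTH)
      print(f"~{years:.0f} years")  # ~430 with these round numbers, in line with the ~400 above
      ```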

      Two possibilities. Either it’s a hard physical limit, or it can be got around with ingenuity.

      If it can be gotten around, it will be by innovating on efficiency rather than harnessing more energy, and an efficient civilization is quiet. As receiver tech improves, broadcasters can use weaker, cheaper signals. It seems that eventually everything is replaced by wires and coherent beams which hardly leak at all.

      • moridinamael says:

        I’ve read the assertion that 1 kg of computronium could allegedly perform more computation than all of present-day human civilization. Even if this estimate is off by orders of magnitude, it wouldn’t matter, because in the future we could build arbitrary amounts of computronium. The point is basically what you just said: *economic growth* doesn’t mean *chowing down on moar energy.* It’s the physicist who is ignorantly abusing a term of art in that conversation.

        • Paul Torek says:

          Isn’t there a minimum entropic gain to store and retrieve a memory? If so, what would be the minimum power requirement to fuel a human-like (in terms of # of memory operations) thought process?

        • Andrew G. says:

          Energy cost to irreversibly lose n bits of information is nTkB ln(2) where T is the absolute temperature and kB is Boltzmann’s constant (1.38e-23 J/K).

          If we handwave an upper bound for the amount of information lost by a human consciousness as 1e14 bits/sec (1e11 neurons losing 1000 bits/sec each), by my calculations that comes to the convenient number of approximately 1 nanowatt per degree absolute.
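
          A quick sanity check of that figure, using the numbers as given above:

          ```python
          import math

          # Landauer: erasing n bits costs at least n * kB * T * ln(2) joules,
          # so the minimum power scales linearly with absolute temperature.
          K_B = 1.38e-23       # Boltzmann's constant, J/K
          BITS_PER_SEC = 1e14  # handwaved: 1e11 neurons losing ~1000 bits/sec each

          power_per_kelvin = BITS_PER_SEC * K_B * math.log(2)
          print(f"{power_per_kelvin:.1e} W/K")  # ~9.6e-10, i.e. roughly 1 nW per kelvin
          ```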

        • Paul Torek says:

          Thanks Andrew!

    • Vertebrat says:

      “Every exponential curve is a logistic curve in disguise”, as the saying goes. But this is no impediment to Great Filter arguments; cubic growth is still enough to fill the galaxy in mere millions of years.

    • spandrell says:

      That’s very interesting, and I wonder how it applies to Hanson’s ems theory.

      What are the projected power requirements to emulate a human brain? If it’s anything close to the actual power consumption of a human brain (and I don’t see why it wouldn’t be) then there’s no way we’ll reach the “trillions of people” he always talks about.

      • moridinamael says:

        The power requirements will diminish with time in a pseudo-Moore’s law fashion, just like with everything else. Once the brain is implemented as an algorithm, there is no fixed energy cost.

        Right now it would cost untold megawatts of power, because we don’t have the hardware for it. Give it a couple decades.

        • spandrell says:

          No fixed energy cost? Not my area of expertise but how does that make any sense?

          Surely there’s a hard limit on how many brain emulations can be run simultaneously using all the energy available on Earth. The same way you can’t run infinite instances of Photoshop. I’m just asking for a number. 1 trillion? 10?
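
          For what it’s worth, one crude way to bound the number, under the pessimistic assumption that an emulation never gets cheaper than a biological brain (~20 W); the power-budget figures are rough round numbers, not Hanson’s:

          ```python
          # How many always-on brain-scale emulations could various power budgets support?
          WATTS_PER_EM = 20.0          # rough power draw of a biological human brain
          world_power = 1.8e13         # ~18 TW, rough current human energy use
          sunlight_on_earth = 1.7e17   # W, total solar power intercepted by Earth

          print(world_power / WATTS_PER_EM)        # ~9e11: about a trillion at today's budget
          print(sunlight_on_earth / WATTS_PER_EM)  # ~8.5e15: quadrillions at full solar capture
          ```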

  45. ADifferentAnonymous says:

    The argument that alien exterminators aren’t the great filter seems to shift without warning into an argument that if alien exterminators are the great filter we can’t do anything about it, without ever really making the original case.

    Good news is that they don’t have to be exterminators; expansion-suppressors could explain our observations well enough.

  46. Will Newsome says:

    Alien AIs already swept past. Non-interference or close to it is the obvious equilibrium in an economy of AIs with diverse preferences and approximately zero need for human resources. http://computationaltheology.blogspot.com/2012/05/superintelligent-solution-to-fermi.html Out of a hundred AIs a single AI with a strong preference that emerging civilizations not be fucked with would have sufficient bargaining power to ensure its desired outcome just by trading small black hole access rights http://lesswrong.com/lw/12v/fair_division_of_blackhole_negentropy_an/ , which are way more valuable to generic self-interested AIs than a few thousand measly planets. In practice non-interference might not be strict, with AIs allowed to interfere with civilizations’ affairs under agreed-upon conditions, e.g. where there is sufficient plausible deniability. Anyway, seriously resource-intensive operations happen near black holes http://www.weidai.com/black-holes.txt . Alien AIs aren’t worried about letting stars apparently burn down; once the aliens have swept past they can reverse time itself if need be (engineered quantum recoherence + reversible computing). This kind of explanation is the only one I know of that really heeds the Copernican principle.

    • Will Newsome says:

      I try ^_^

    • endoself says:

      What? This is really straightforward; he’s stating a bunch of things that he usually leaves implicit. Did you read all the links?

    • moridinamael says:

      This would require that the AI sweep past pretty much at the narrow moment that we are emerging as a sapient species, and not, say, ten million years ago.

      Even if we assume that the AI did arrive at the Sol system and notice Homo Sapiens, and find us valuable, why not accept that the superintelligence did eat us up, and we ARE all living in the counterfactual simulation? And we would never be able to disprove this.

      I mean, if *I* were uplifted to superintelligence and my current 2014 values were frozen exactly as they are, and I started colonizing the universe, and I stumbled upon some aliens, my general policy would probably be something like “I’d better simulate these little guys for safekeeping, and so I can make sure nothing bad happens to them!” and then, yknow, grind their physical bones to make my bread.

      • Will Newsome says:

        If the policy is just “leave all potentially life-bearing planets alone” then it doesn’t require any special timing. I’ve changed my mind since I wrote my Computational Theology post, I now think there’s even less incentive for AIs to eat planets. That said, if AIs do want to eat planets, then I would expect sentient-life-protecting AIs and thus humanity to be compensated in the form of simulations, like you say. Eating and then simulating planets might piss off some AIs though, and the benefits of eating habitable planets don’t seem to be worth that risk, IMO.

        One question I occasionally ponder is whether AIs will care way more than humans do about precise details of phenomena, perhaps even at the quantum level, such that eating planets wouldn’t be worth the information lost, i.e. the extremely precise details of how the planets would have evolved naturally sans eating. But there might be ways to completely solve that problem that also allow for eating.

        • endoself says:

          What you need to do in order to have engineered quantum recoherence depends extremely sensitively on initial conditions. If they can collect enough information about us to reverse time, then they have enough information to simulate us.

    • Scott Alexander says:

      This is just about what I think on this problem too. Except I think this time I might have successfully out-crazied you. I am becoming more and more attracted to the theory in this short story I wrote in 2010.

  47. Ronak M Soni says:

    May be repeating previously refuted shit here, but the great filter may very well be things like inherent limitations on the speed of space travel, or the impossibility of life emerging until some point in the past too recent on the space-colonising timescale.

    These are the first things that pop to my mind. I’m not sure what the correct rationality methods are for considering inscrutable probabilities or unknown hypotheses (that’s what the great filter is – a placeholder for ‘there may be something here’), or indeed if they’ve even been developed, so I won’t extend this to stating a correct opinion.

  48. Marcel Müller says:

    There is a common assumption in most articles about the Fermi paradox which is also made here (without being stated): our knowledge of physics (and by extension future technology) is more or less complete.

    I think it is quite likely that the idea of an advanced civilisation that harvests all matter/energy in its lightcone is nearly as far off the mark as the idea of aliens farming humans for food or stealing our water. (I will not speculate on computation in purpose-built pocket universes or reversible computation in the dark matter or something like this, since it would probably be even more off the mark.) Possibly we simply do not have the slightest idea what a civilisation a thousand, a million or a billion years down the line looks like, and drawing conclusions from its projected behavior is entirely futile.

  49. Pingback: Alexander Kruel · Miscellaneous Items 20140602

  50. Matthew says:

    Completely forgot to link to this. Recommended. The Fermi Paradox is Our Business Model

    Scott probably won’t check this thread now; somebody please bring this to his attention.

  51. Stuart Armstrong says:

    >That means it can’t be the Great Filter, or else we would have run into the aliens who passed their Kyoto Protocols.

    It’s not XOR – X-risks can perfectly well slice off a portion of probability space, with earlier barriers slicing off more.

    Incidentally, our paper shows the “ease” of crossing between galaxies, making the Fermi paradox a few million times stronger: http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf

  52. Pingback: Miscellaneous Gadgets 20140602 | JanNews Blog

  53. Pingback: Simulations and the Epicurean Paradox | The Rationalist Conspiracy