I’m late to posting this, but it’s important enough to be worth sharing anyway: Sandberg, Drexler, and Ord on Dissolving the Fermi Paradox.
The Fermi Paradox asks: given the immense number of stars in our galaxy, for even a very tiny chance of aliens per star there should be thousands of nearby alien civilizations. But any alien civilization that arose millions of years ago would have had ample time to colonize the galaxy or do something equally dramatic that would leave no doubt as to its existence. So where are they?
This is sometimes formalized as the Drake Equation: think up all the parameters you would need for an alien civilization to contact us, multiply our best estimates for all of them together, and see how many alien civilizations we predict. So for example if we think there’s a 10% chance of each star having planets, a 10% chance of each planet being habitable to life, and a 10% chance of a life-habitable planet spawning an alien civilization by now, one in a thousand stars should have a civilization. The actual Drake Equation is much more complicated, but most people agree that our best-guess values for most parameters suggest a vanishingly small chance of the empty galaxy we observe.
SDO’s contribution is to point out this is the wrong way to think about it. Sniffnoy’s comment on the subreddit helped me understand exactly what was going on, which I think is something like this:
Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilizations. If it came up tails, He made none besides Earth. Using our one-parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?
No. In this case the mean is meaningless. It’s not at all surprising that we see zero alien civilizations, it just means the coin must have landed tails.
SDO say that relying on the Drake Equation is the same kind of error. We’re not interested in the average number of alien civilizations, we’re interested in the distribution of probability over number of alien civilizations. In particular, what is the probability of few-to-none?
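In code form, the toy example looks like this (a trivial sketch, just to show that the mean and the probability of emptiness answer different questions):

```python
# God's coin flip from the example above: heads -> 10 billion civilizations, tails -> none.
outcomes = {10_000_000_000: 0.5, 0: 0.5}

mean = sum(n * p for n, p in outcomes.items())
p_empty = outcomes[0]

print(f"mean number of civilizations: {mean:,.0f}")    # 5,000,000,000
print(f"probability of seeing none:   {p_empty:.0%}")  # 50%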
SDO solve this with a “synthetic point estimate” model, where they choose random points from the distribution of possible estimates suggested by the research community, run the simulation a bunch of times, and see how often it returns different values.
According to their calculations, a standard Drake Equation multiplying our best estimates for every parameter together yields a probability of less than one in a million billion billion billion that we’re alone in our galaxy – making such an observation pretty paradoxical. SDO’s own method, taking parameter uncertainty into account, yields a probability of one in three.
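Here is a minimal sketch of the resampling idea, extending the toy example above. The parameter ranges below are invented for illustration and are not the distributions SDO actually use; the point is only that multiplying single best guesses together answers a different question than asking how often the sampled product falls below one.

```python
import math
import random

def sample_log_uniform(lo, hi):
    """Draw a value whose logarithm is uniform between log(lo) and log(hi)."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

# Toy three-parameter "Drake equation": N = stars * f_planets * f_life * f_civ
STARS = 1e11                   # rough star count for a Milky Way-like galaxy
PARAM_RANGES = {               # illustrative uncertainty ranges, not SDO's
    "f_planets": (1e-2, 1.0),
    "f_life":    (1e-10, 1.0), # abiogenesis is where the huge uncertainty lives
    "f_civ":     (1e-4, 1.0),
}

def sampled_N():
    n = STARS
    for lo, hi in PARAM_RANGES.values():
        n *= sample_log_uniform(lo, hi)
    return n

draws = [sampled_N() for _ in range(100_000)]

# "Point estimate" style: plug in a single middle-of-the-road value per parameter.
point_estimate = STARS * math.prod(
    math.sqrt(lo * hi) for lo, hi in PARAM_RANGES.values()
)

print(f"point-estimate N: {point_estimate:.3g}")
print(f"mean of sampled N: {sum(draws) / len(draws):.3g}")
print(f"fraction of draws with N < 1 (empty galaxy): {sum(d < 1 for d in draws) / len(draws):.2%}")
```

With these made-up ranges the point estimate comes out around a thousand civilizations, while on the order of a fifth of the draws predict fewer than one – so observing an empty galaxy is no paradox at all.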
They try their hand at doing a Drake calculation of their own, using their preferred values, and find:
[figure from the paper: the resulting probability distribution, where N is the average number of civilizations per galaxy]
If this is right – and we can debate exact parameter values forever, but it’s hard to argue with their point-estimate-vs-distribution logic – then there’s no Fermi Paradox. It’s done, solved, kaput. Their title, “Dissolving The Fermi Paradox”, is a strong claim, but as far as I can tell they totally deserve it.
“Why didn’t anyone think of this before?” is the question I am only slightly embarrassed to ask given that I didn’t think of it before. I don’t know. Maybe people thought of it before, but didn’t publish it, or published it somewhere I don’t know about? Maybe people intuitively figured out what was up (one of the parameters of the Drake Equation must be much lower than our estimate) but stopped there and didn’t bother explaining the formal probability argument. Maybe nobody took the Drake Equation seriously anyway, and it’s just used as a starting point to discuss the probability of life forming?
But any explanation of the “oh, everyone knew this in some sense already” sort has to deal with the fact that a lot of very smart and well-credentialed experts treated the Fermi Paradox very seriously and came up with all sorts of weird explanations. There’s no need for sci-fi theories any more (though you should still read the Dark Forest trilogy). It’s just that there aren’t very many aliens. I think my past speculations on this, though very incomplete and much inferior to the recent paper, come out pretty well here.
(some more discussion here on Less Wrong)
One other highlight hidden in the supplement: in the midst of a long discussion on the various ways intelligent life can fail to form, starting on page 6 the authors speculate on “alternative genetic systems”. If a planet gets life with a slightly different way of encoding genes than our own, it might be too unstable to allow complex life, or too stable to allow the rate of mutation that natural selection needs. It may be that abiogenesis can only create very weak genetic codes, and life needs to go through several “genetic-genetic transitions” before it can reach anything capable of complex evolution. If this is path-dependent – ie there are branches that are local improvements but close off access to other better genetic systems – this could permanently arrest the development of life, or freeze it at an evolutionary rate so low that the history of the universe so far is too short a time to see complex organisms.
I don’t claim to understand all of this, but the parts I do understand are fascinating and could easily be their own paper.
I fully agree with using probability distributions rather than point estimates, but I wonder if there are anthropic effects. Say among the “multiverse branches” with any life, 50% have exactly one technological civilization, 40% have ten, 9% have a thousand, and 1% have a million. It seems like there would still be far more observers that are among lots of other civilizations than those that are alone.
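A quick back-of-envelope using the numbers in that comment (not anything from the paper), weighting each branch by how many observers it contains:

```python
# Branch probabilities and civilization counts from the comment above.
branches = [(0.50, 1), (0.40, 10), (0.09, 1_000), (0.01, 1_000_000)]

# Weight each branch by how many civilizations (i.e. potential observers) it contains.
total_weight = sum(p * n for p, n in branches)
alone_weight = sum(p * n for p, n in branches if n == 1)

print(f"expected civilizations per life-bearing branch: {total_weight:,.1f}")
print(f"fraction of observers who are alone: {alone_weight / total_weight:.4%}")
```

So even though half of the life-bearing branches contain a lone civilization, only about one observer in twenty thousand finds itself alone, which is the worry being raised here.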
It may be that there is some irreducible complexity of a minimal viable entity that can undergo evolution. In that case abiogenesis would be an extremely unlikely event in any strand of multiverse.
Yup. If universes vary in # of civs, then observers will mostly find themselves in universes with a large # of civs.
This is also a strong argument for a creator (god/simulator/whatever); even if they are rare, they will disproportionately create universes with lots of civs, and therefore most observers will be in created universes.
Of course, we also have to condition on not seeing anybody else.
I like the idea that you’re either in a universe where you’re first, or you’re in a universe where you’re so far behind that you’re kept isolated from the higher sapiences as a kind of zoo/sapiopromorphic exhibition. The same way sophisticated human civilizations endeavour to keep isolated tribes isolated.
Interestingly, if you think it’s more likely that sapience-dense universes are driven by dark forest logic, and whoever is first wipes out all budding competing intelligences, this massively increases the probability we’re alone. Otherwise we probably wouldn’t have made it this far.
I saw a talk a while ago on youtube (wish I could remember where) arguing that such behavior would be grossly immoral, precisely because it gives us such an incorrect picture of reality. The ideal of letting us develop along our natural path is a sham because of all the false conclusions we would draw about the nature of the universe and our place in it.
I think the SDO approach gets closer to the truth than the Drake Equation, but it just needs a hair of improvement: specifically, what is the probability that more than one alien civ exists, but also that they have a chance of detecting each other prior to one or the other’s extinction? Let’s go with orangecat’s suggestion of 90% that it’s less than 10…
We may be relying too much on the idea that conventional radio communication would be the primary means of interstellar communication and the best “smoking-gun” evidence of an advanced alien civilization, when EM-based communication may just be a short technological bridge. Assuming that a typical civilization uses conventional radio communication for 1000 years of its history, that’s a brief blip compared to the history of the universe, and detection would require the listening civilization to develop its own radio communication within that specific 1000-year blip, and also to be in close proximity (maybe 1000 light years) so that the two civilizations could still be at concurrent stages of development.
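A rough way to quantify that worry about overlapping “blips” (purely illustrative numbers, not anything from the paper or this comment): scatter some radio-using civilizations uniformly over the galaxy’s history, give each a 1000-year radio-loud window, and count how often any two windows ever coincide.

```python
import random

GALAXY_AGE_YEARS = 10e9      # illustrative age of the galaxy
RADIO_WINDOW_YEARS = 1_000   # assumed radio-loud era per civilization
N_CIVS = 1_000               # assumed number of radio-capable civs, ever
TRIALS = 2_000

def any_overlap() -> bool:
    """One simulated history: do any two radio windows overlap in time?"""
    starts = sorted(random.uniform(0, GALAXY_AGE_YEARS) for _ in range(N_CIVS))
    # Two windows overlap iff their start times are within one window length,
    # and it's enough to check consecutive starts once they're sorted.
    return any(b - a < RADIO_WINDOW_YEARS for a, b in zip(starts, starts[1:]))

overlap_rate = sum(any_overlap() for _ in range(TRIALS)) / TRIALS
print(f"histories where at least two radio windows coincide: {overlap_rate:.1%}")
```

Under these assumptions the windows coincide in only a few percent of simulated histories, and that’s before requiring the overlapping pair to be close enough to hear each other, which is the point about proximity above.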
There’s at least evidence to suggest conventional radio is a small blip in the technological development of a civilization (much like Freeman Dyson’s suggestion that fossil fuels are the initial *spark* before a move towards nuclear/renewables). Advancements in OAM multiplexing would mean that signals at certain frequencies could potentially look no different than solar flares (a sudden build-up of energy at certain spectra) but in fact contain multiple streams of data in different orbital angular momentum modes, indecipherable with conventional antennae. We’ve also begun developing neutrino beams as a means of communication. I wouldn’t be surprised if advanced civilizations switch to neutrino transmitters to avoid EM interference for interstellar probes sent beyond a planet’s heliosphere, and reserve low-power EM for wireless communication on a home planet, too weak relative to background noise for something like SETI to pick up.
Honestly, the lack of radio transmissions from aliens seems like a red herring with regard to the Fermi Paradox, because the real issue is not seeing massive regions of space where all the stars have been disassembled or enveloped in Dyson swarms.
At an advanced level of tech you really only need, in theory, a single person who wants to send out von Neumann probes for the civilization to become extremely obvious pretty quickly (on cosmic timescales).
Similarly, there are also pretty good reasons you’d want to extinguish/surround stars, since otherwise all that energy is mostly just getting permanently lost as extremely dispersed, unusable light in interstellar space and speeding up the heat death of the universe.
True, but can we even quantify how much time it takes for a civilization to reach that level of technology (Dyson spheres or massive von Neumann probes)?
I’m skeptical of any type of full-scale Dyson sphere eclipsing the full flux from a star, since stars need to be sufficiently large to be self-sustaining (for example, in our case, our sun has 12,000 times the surface area of Earth), so even bootstrapping something like a Dyson swarm that could absorb 1% of our Sun’s energy as an array of 1m thick panels containing absorber, converter, and storage would end up requiring 5.1 * 10^14 cubic meters of *stuff*, or strip-mining our entire planet (like, every damn mountain!) along with repurposing an enormous number of asteroids, never mind all the energy needed to assemble and transport those materials.
The question basically is: how much of a star’s surface could we expect to be covered, given the age/tech level of an alien civ? And given those numbers, how easily could we detect a swarm if 1%, 0.1%, 0.01%, or 0.001% of a star’s flux is missing in every spectrum other than infrared?
As for von Neumann probes, it’s possible we’ve already been visited by them, and/or they have limitations in their replication protocol. Dyson’s proposed size for his “astro-chicken” was 1 kg, but how much matter would it take to make a self-replicating astro-chicken factory? At this moment, we could say that human civilization *itself* is a von Neumann probe factory bootstrapping itself 😛
Our calculation in “Eternity in six hours” showed that self-replicating factories could build a Dyson sphere out of Mercury in 40 years. Building Dyson spheres in a linear fashion takes forever, but self-replication makes it very fast on astronomical timescales.
Enjoyed it. Stumbled across three typos:
just before 5.1: “comically insignificant” (unless that’s a joke)
section 7: “civlizations”
last paragraph: “likeliehood”
A very important result; thank you for your writeup!
HTML typo: “…explaining the formal probability argument. Maybe the Dark Forest trilogy.”
Also, would you mind labeling the graphs better? I’m guessing the N is something like “number of civilizations / number of stars,” which means that the area to the right of the red line represents occurrences of “More than one civilization in the Milky Way in our simulation,” but I’m not sure there.
Yep. Neat!
Also, on the typos topic, the word “in” after the word “hidden” before “the supplement.”
As far as I can tell, the x-axis of the second graph should be labelled “x” rather than “N”. (Though I’m sure this is copied from the paper, and not Scott’s work.)
Yeah some more detailed explanation of the graphs would have helped me a lot.
I’ve always privately resolved the Fermi paradox in my head by figuring that if a species becomes intelligent enough to send radio signals, it’ll probably quickly either destroy itself or take over the universe (the “all-or-nothing” assumption), making it so that there’ll only ever be one intelligent species around at a time. So maybe it’s not surprising that we’ve found ourselves to be alone.
But I’ve never bothered to think carefully about it, and it’s possible that that reasoning is confused. For example: suppose you only think the all-or-nothing assumption is true with some probability. Then, if you observe that there are no aliens who’ve sent you radio signals, should you decide that it’s therefore more likely we’ll one day either destroy ourselves or take over the universe? That seems pretty counterintuitive to me…
I think the standard retort to that is that the universe seems quite old right now, such that if it were easy for planets to generate life we should be surprised that we were the first.
Well, unless it’s way easier to destroy yourself than to take over the universe, and we’re not the first. That would suggest that we should be pessimistic about our own prospects.
This whole line of reasoning feels so weird to me though—using apparently-irrelevant information about space to try to predict our own future—that I don’t seriously subscribe to it. Or maybe it’s just my bias towards optimism. 🙂
Ah, I clicked the link in the post to your previous post about the Great Filter. Yeah, it seems like this recent paper is providing some evidence against the Great Filter by arguing for a valid alternative explanation to why we’re seeing no aliens.
Well, no, not quite — It’s still clear that there are plenty of filters, some greater than others. But it raises my belief that the biggest filters are in our past — those are the ones for which they reported the largest uncertainty.
It’s not obvious to me that this does anything to the paradox at all.
Seems like the most coherent way to think about it is that whatever filters are out there are basically the aforementioned ‘God’s coinflips’.
If you don’t make it past the coinflip, your distribution is 0.
A drunk physicist told me that old planets have the following advantages:
1) The ratio of U235 to U238 gets lower over time, making nuclear weapons easier to manufacture on young planets.
2) The ratio of heavy water to normal water gets lower, making water unable to sustain a runaway fusion reaction. (Humans tested underwater nuclear weapons before we realized this)
Does anyone know if this is reasonable?
The first definitely is, the second less so but still technically accurate. It’s a question of averages, though, so while yes, there’s less nuclear material on an older planet, it’s unlikely you’ll get to a point where there’s none before its parent star leaves the main sequence. And you don’t need a huge volume to make enough weapons to wipe out a civilisation.
Even on a planet around a red dwarf star, which could have a stable biosphere after billions of years, warlike inhabitants could synthesize their radiologicals in particle accelerators. It would be more expensive, but cost is no real barrier when it comes to weapons.
The asteroid that wiped out the dinosaurs was equivalent to around 100 million megatons, the entire global nuclear arsenal is 10,000 megatons.
Some humans would survive the dinosaur killer.
It’s not that easy to permanently wipe out a technological civilization with nukes.
I think the second one doesn’t work, because the estimates I’ve seen say you would need a 20 million megaton nuke (200 million times more powerful than the Tsar Bomba) and a deuterium concentration 20 times higher for the oceans to sustain a runaway fusion reaction:
http://blog.nuclearsecrecy.com/2018/06/29/cleansing-thermonuclear-fire/
Not sure how high the deuterium concentrations get on newer planets, but unless they are many orders of magnitude higher again, the aliens are going to have to be really reckless to build an ocean-igniting bomb before they understand the danger.
For number 1, it seems to check out physically, but I don’t think it really matters with respect to how easy nukes actually are to make. On Earth, we were already using reactor-bred fuel at about the same time we were using natural fuel (uranium vs. plutonium bombs), and there’s at least one path that lets you breed weapons-grade fuel from thorium instead.
Number 2, I don’t think that’s an issue? Even if there were an exothermic fusion reaction from plain water to some other atomic mix (there probably is even with plain light water), I think the temperature required to sustain would be higher than is compatible with a self-sustaining reaction on a planet. It’s not like combustion, where it’s entirely possible for matter at that temperature to just hang around waiting to burn for a while; nuclear-reaction temperatures are pretty much incompatible with matter remaining in the same place unless you’ve got enough to gravitationally confine it (which would need greater than a planet’s mass). The expected result from any fusion of (light or heavy) water, from whatever trigger, is that the fuel would all explode out away from the reaction site and quickly lose heat until it was below the necessary temperature.
If you go back far enough, you don’t even need to enrich uranium to get a chain reaction.
There are uranium deposits in Africa that were once a natural nuclear reactor.
https://en.wikipedia.org/wiki/Natural_nuclear_fission_reactor
Is the universe that old though? It’s only about 14 billion years old and can potentially support life for probably another trillion years. I’d say it’s in its infancy. It also seems like we’re in the first generation of stars that would be able to create life.
The traditional formulation of the paradox is that, while the galaxy is arguably young, it’s plenty old for a spacefaring civilization to have spread over it if there was one.
On Earth, there’s no obvious reason why intelligence couldn’t have arisen, for instance, in the age of the dinosaurs (say a hundred million years ago), and guesstimates for how long it would take such an intelligence to settle the entire galaxy (e.g. in this paper) range from one to ten million years.
It is interesting, though, that the Drake Equation doesn’t have an explicit term for the age of the universe. The paper observes that the last term L (length of time over which such civilizations release detectable signals) is currently bounded by the age of the universe.
I saw a paper a few years back that I cannot currently find, which suggested that while yes, the galaxy is “old enough,” the younger stars have only just died off. These are the ones that are large, burn hot and fast, and die in supernovae that release enough high-energy radiation to sterilize volumes of space hundreds (or even thousands) of light-years in diameter. When figuring out how much time a galaxy has had to evolve complex life, this should be taken into account.
Also, our solar system has the benefit of residing in the far less dense spiral arms of the galaxy.
This sounds reasonable to me. Maybe there is some sort of catastrophe (could be man-made, could be a nearby supernova, could be some terrible solar storm), which happens no more than once every few centuries and only affects technologically advanced civilizations, while leaving more primitive ones relatively unscathed.
Earth people have only been technologically advanced for around 200 years, it’s hard to tell what will happen next. Maybe we’ll soon find out what the catch is.
It’s not straightforward, because all elements heavier than helium are formed in stars, so first generation stars didn’t have planets. The Sun has an expected lifetime of 10 billion years and the age of the universe is 14 billion years. Life on Earth evolved about 3.7 bya, so naively that may have been reasonably early.
That said, hotter stars have shorter lifetimes, and they are particularly good sources of heavy elements, so my best guess would be that there have been planets for nearly 10 billion years.
Also, it doesn’t feel like the path from abiogenesis to humans was optimal. We definitely know that life-supporting planets have existed for 3.7 billion years, which seems like it should have been plenty of time for an advanced civilisation to evolve.
“so first generation stars didn’t have planets. ”
is it impossible to have a gas giant that’s all hydrogen and helium?
I don’t think that is known for certain, but last time I checked the consensus was that for anything less than a star you probably need an initial rocky core to begin pulling in the hydrogen and helium.
It’s been a long time since I studied this, but I think brown dwarf stars can form from hydrogen and helium, i.e. the star forms as usual but the mass is insufficient to ignite. A brown dwarf in a binary system is effectively a gas giant.
Be that as it may, a planet composed solely of hydrogen and helium could not support life.
No. You need to be able to cool efficiently. Gravitational instabilities can create objects of ~7 Jupiter masses at solar metallicity. However, this only applies to the very first generation of massive stars, which have exceedingly short lives (<1 Myr). The main-sequence lifetime of intermediate-mass B stars is less than a Gyr. We have had sufficiently steady star formation in our galaxy that there has been continuous formation of planetary systems around solar-like stars for the past ~12 Gyr.
Is there any reason to assume humanity is not the first species in the universe to meet all the conditions required for a civilisation detectable in space? Both you and Robert Jones lean heavily on “seems” to justify the position that the universe is quite old and should therefore have produced intelligent life before now, which is an indication of a totally impressionistic viewpoint (I’m not claiming this is irrational, just that as far as I can see it rests on an unsubstantiated reaction to a big number).
Empirically we have a limited data set for how long the universe takes to produce life capable of sending signals beyond its own planet, and that figure is 14 billion years or so. Whatever it seems like, we know this figure applies to us, and it is therefore a realistic estimate for the minimum period of time required for such a lifeform to appear.
This obviously assumes Earth has been an ideal environment for producing life capable of sending signals beyond its own planet. But unless someone can show this is not the case, it seems like a reasonable assumption to make in light of our very limited information, and pretty much a certainty for the small area of local space where we have been able to scan for radio waves.
There seems to be a reluctance in human intellectual endeavours to consider ourselves uniquely special, which might relate to the detaching of science from religion (which does normally consider humans to be uniquely great and wonderful). But an assumption of human uniqueness is the position that best fits the evidence we currently have. We might not be unique or the first, it being a big universe, but the best hypothesis has to be that we are in the first generation of species detectable from other planets.
Watchman, there’s no reason to assume that humanity is not the first, in the sense that there’s no paradox. It’s completely consistent with what we know for civilisations at least as advanced as ours to be super-rare, so the expected number per galaxy (or universe) is 1 (or less).
What would be hard to credit would be that the Drake equation yields an answer much larger than 1, but that of millions of civilisations in our galaxy, ours happens to be the first to have become capable of sending signals beyond its own planet. What prevented dinosaurs from developing a civilisation as advanced as ours? It looks like it’s just chance, so if there were millions of planets on the same general trajectory as ours, some of them would have had intelligent dinosaurs (so to speak) and would have been sending radio signals millions of years ago.
This whole area is really impressionistic. If you want to be rigorous, you just have to throw your hands up and say we have no idea.
I suspect that you may be right that the real “paradox” stems from Copernican modesty, because it requires us to accept that Earth is an exceptional planet in one way or another (either because it is unusual in possessing the prerequisites for intelligent life or because an extremely unlikely event happened here).
I like the idea of Copernican modesty. Wonder why I’ve never encountered it before.
I’m not so sure that it’s just luck that dinosaurs didn’t create an advanced civilisation, though. It might be a practical constraint, such as the fact that grass (with its key role in producing surplus food and energy) had not evolved, to pick out one single speculative factor (and one could, if so inclined, then presuppose that dinosaurs were necessary for the evolution of grass for some reason…). Luck would only apply if the conditions for that luck to produce an advanced civilisation were in place, and (since there was a lot of time for dinosaurs to get lucky in) it seems better to suggest that dinosaurs suffered from some constraint(s) that stopped any potential lucky break towards advanced civilisation happening.
To defend my hypothesis here more directly: there is a technical possibility that an earlier dominant species might have achieved an advanced civilisation, but as there is no evidence for this, it is better to suggest that advanced civilisation only became possible at the point it did. This requires the belief that civilisation will develop as soon as conditions are right, but as one condition for civilisation (as far as we know) is to have a species as adaptable and opportunistic as humanity, that seems a reasonable assumption.
I’m not going to suggest this is definitely right, but the assumption that advanced civilisation is not possible before a certain set of conditions has been met, and that humanity is at the end of a process that has met those conditions as quickly as possible, seems reasonable. And it allows us the wonderful answer to the where’s-all-the-aliens question of ‘just where we are, wondering where all the aliens might be’.
It is worth noting that we apparently don’t have great evidence that the dinosaurs didn’t create a sophisticated technological civilization.
https://www.theatlantic.com/science/archive/2018/04/are-we-earths-only-civilization/557180/
This is an article by one of the two authors of a paper looking at whether we’d be able to identify evidence of advanced technological civilizations in our own past; the other author is the head of the Goddard Institute for Space Studies, so it’s not like this is cranktown. Although they do refer to it as the Silurian Hypothesis, because why wouldn’t you?
Actual article here: https://arxiv.org/abs/1804.03748
The general answer is that it would be very tough to identify the existence of even a global industrial civilization that lasted for a hundred thousand years, let alone the brief couple of centuries like ours.
The specific answer is that you might be able to work it out by finding the geological evidence left behind by releasing a bunch of fossil carbon into the atmosphere and causing drastic climate change; however, they also point out that if a species were capable of surviving as an industrial society for more than a few hundred years, they probably found a way to do it without wiping themselves out with climate change, which would therefore reduce our ability to detect them.
A couple thoughts on “Prior Earth Civs”:
1.) I’m kind of undecided if it’s possible to detect such a past civ or not. But I’d say if it got to 1960s level we should be able to observe whatever it left behind on the moon and further we should be able to detect radioactive isotopes and the like.
2.) There’s possibly another filter that goes like this:
“Humans needed fossil fuels to bring about the industrial revolution. If there is a first civ that uses all the fossil fuels (and possibly metals like iron) then no other civs can arise until those fossil fuels regenerate. Meaning that if there is a nuclear or other apocalypse, planets can only support an Industrial Civilization once every ~500 million years (or however long it takes the oil to come back)”
I think this isn’t entirely true. I figure, you know, 18th century England/France was a pretty scientifically advanced place. They’d probably eventually work out the math behind solar or nuclear power. Though it could delay them by a thousand years or so. Still, a thousand-year delay is much smaller than a 500 million year one.
I think that you could identify any civilization on a similar scale to ours through graveyards. If you have billions of individuals being ritualistically buried across the globe over thousands of years, then they ought to be overrepresented in the fossil record; you would at least expect to find skeletons near impressions of their technology.
Fossils are mentioned in the paper. The issue is that fossilization is really, really ludicrously rare, and only occurs in the exact right circumstances. Almost nothing actually gets fossilized.
I recall reading something (which source I of course can’t now recall – maybe ‘A Short History of Nearly Everything’ but maybe not) that mentioned how we have a really skewed idea of Earth’s past because of the fossil record. The great majority of organisms that ever lived left no fossils, and so we have this incredibly tiny sample from the ultra-rare instances where it did occur, and that lets us imagine we know what was going on in the past.
The example they gave would be if a civilization of intelligent beings living on Earth a few hundred million years from now thought they had a good grip on what life was like in our time, because they had discovered fossils of a housecat, a mastodon, and a triceratops.
Fossils basically give us an idea of which animal fell into exactly the right patch of mud every few million years, and that’s about all. In the hypothetical example of a species trying to deduce our existence in a hundred million years, the odds they would find any fossil remnant of our civilization are very small. (In fact, the authors there argue that even if our industrial civilization exists for another hundred thousand years, we will probably be hard to find from the fossil record – let alone our mere few-centuries blip.)
I do like the authors there somewhat cheekily pointing out that there is at least one period in Earth’s history where it actually does appear that all the fossil fuels got somehow removed from the ground, burned, and caused a period of massive global warming. They are very careful not to actually claim that this is because of dinosaurs driving trucks, but, you know, their whole paper is basically arguing that we don’t have any particular reason to think it wasn’t, either.
Wouldn’t “truck fossils” or unusually shaped iron be pretty easy to spot too? Again, especially on the Moon.
So if they got to 1950s level. However anything before Sumerian level would be pretty tough to spot still.
I mean like animal fossils are easy to miss because they’re not that big. But Naval Cruisers are sort of gigantic.
Or they can build a civilization using biofuels or wind/hydro power or whatnot, which is almost certainly possible for humans and even more certainly possible for some non-trivial subset of aliens living on the set of all possible life-bearing worlds. Worst case, industrialization is an evolutionary change over a millennium or so rather than a revolutionary one over a century, which is hardly a disadvantage in this case and may be advantageous in terms of favoring social stability and long-term planning.
I don’t think “everybody screws up the first time and is then condemned to the preindustrial dark age for ever and ever” is a terribly good candidate for the Filter. But, with this mathematical formalism, we should throw it in the mix as another possibility.
For humans you are still talking about estimates of about 5 million people alive prior to agriculture versus 7 billion alive today. Civilization can support 1,000 times as many individuals as proto-civilization could for humans, so if it lasts even 1/10th of the time, any fossil you found would be 90% likely (assuming a lot to get that number about distribution, I know) to be from the civilized era. That also ignores that many earlier fossils will be found with tools and other signs of proto-civilization.
Sure, most species don’t leave fossils but most species don’t span the globe like humans, while practicing ritualistic burial (which I view as highly likely to be a component of any civilization for various reasons) and also making untold numbers of durable artifacts.
There is also a large distinction between an inaccurate view of the past, and a lack of any knowledge. If some future species only finds remains of cell phones they might have no idea what our civilization looked like or what they are for, but they would still be strong evidence that our civilization existed.
The claim being made is that those durable goods aren’t durable on the timescales we’re talking about. Even skyscrapers, aircraft carriers, and other large structures will be completely eroded within a few hundred thousand years, leaving no sign other than faint chemical residue in a geological stratum – let alone cell phones and graveyards. Over a timescale of millions of years, everything is reduced to molecular-scale dust, except for the vanishingly small percentage of objects which just happen to wind up in precisely the right environment to preserve them, or to leave the right imprint in a patch of mud and get filled in with sedimentary rock, which someone is then lucky enough to find.
Basically, there’s a really good chance that in any given million-year period of history, there might be only a handful of fossils made, if any; you can fit all of human civilization from the first flint-knapped tool in Africa to the moon landing into the gaps in the fossil record and still have room left over for the next few hundred thousand potential years of civilization.
The point about relics on the moon is a good one – stuff up there would last a lot longer. On the other hand, it would get buried over a long enough period of time. A dinosaur lunar module buried under five hundred million years of moon dust might still have evaded notice.
Which percentage isn’t actually vanishingly small, because check out all the museums full of impressively un-vanished dinosaur fossils (and their basements full of even more numerous but less impressive fossils).
If a tree can be fossilized, so can a wooden ship. And the lifespan of a wedding ring dropped in the mud might plausibly be measured in eons. Has anyone actually done the math on the probability of a civilization’s worth of artifacts entirely vanishing, or is it all just “steel rusts, mumble mumble Ozymandias” handwaving? If there’s math, a pointer would be genuinely appreciated.
Did some Googling and found this – http://www.bbc.com/future/story/20180215-how-does-fossilisation-happen
(It references Bryson so I think it must indeed be A Short History of Nearly Everything I am fuzzily remembering.)
Every fossil is a small miracle. As author Bill Bryson notes in his book A Short History of Nearly Everything, only an estimated one bone in a billion gets fossilised. By that calculation the entire fossil legacy of the 320-odd million people alive in the US today will equate to approximately 60 bones – or a little over a quarter of a human skeleton.
But that’s just the chance of getting fossilised in the first place. Assuming this handful of bones could be buried anywhere in the US’s 9.8 million sq km (3.8 million square miles), then the chances of anyone finding these bones in the future are almost non-existent.
Put another way – we have a few thousand dinosaur fossils – and dinosaurs were a globally extant form of life for more than a thousand times longer than our species has existed (let alone our cell phones).
Just for completeness’ sake, here is the bit I was misremembering:
IT ISN’T EASY to become a fossil. The fate of nearly all living organisms – over 99.9 percent of them – is to compost down to nothingness. When your spark is gone, every molecule you own will be nibbled off you or sluiced away to be put to use in some other system. That’s just the way it is. Even if you make it into the small pool of organisms, the less than 0.1 percent, that don’t get devoured, the chances of being fossilized are very small.
In order to become a fossil, several things must happen. First, you must die in the right place. Only about 15 percent of rocks can preserve fossils, so it’s no good keeling over on a future site of granite. In practical terms the deceased must become buried in sediment, where it can leave an impression, like a leaf in wet mud, or decompose without exposure to oxygen, permitting the molecules in its bones and hard parts (and very occasionally softer parts) to be replaced by dissolved minerals, creating a petrified copy of the original. Then as the sediments in which the fossil lies are carelessly pressed and folded and pushed about by Earth’s processes, the fossil must somehow maintain an identifiable shape. Finally, but above all, after tens of millions or perhaps hundreds of millions of years hidden away, it must be found and recognized as something worth keeping.
Only about one bone in a billion, it is thought, ever becomes fossilized. If that is so, it means that the complete fossil legacy of all the Americans alive today – that’s 270 million people with 206 bones each – will only be about fifty bones, one quarter of a complete skeleton. That’s not to say of course that any of these bones will actually be found. Bearing in mind that they can be buried anywhere within an area of slightly over 3.6 million square miles, little of which will ever be turned over, much less examined, it would be something of a miracle if they were. Fossils are in every sense vanishingly rare. Most of what has lived on Earth has left behind no record at all. It has been estimated that less than one species in ten thousand has made it into the fossil record. That in itself is a stunningly infinitesimal proportion. However, if you accept the common estimate that the Earth has produced 30 billion species of creature in its time and Richard Leakey and Roger Lewin’s statement (in The Sixth Extinction) that there are 250,000 species of creature in the fossil record, that reduces the proportion to just one in 120,000. Either way, what we possess is the merest sampling of all the life that Earth has spawned.
The whole chapter on fossils is worth a read. He mentions that Trilobites were both the most evolutionarily successful animal in Earth’s history, surviving about 300 million years all over the planet, and that they lived in almost the optimum environment to actually be fossilized – and even the number of Trilobite fossils discovered is still measured in the tens of thousands. A whole specimen is still quite rare.
EDIT – One last note, for another way to think of it – based on those numbers, we should expect that about 73 of the current 8.7 million species on Earth will appear in the fossil record in the future.
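For what it’s worth, the arithmetic behind those back-of-envelope figures checks out (a quick sanity check on the quoted rates, nothing more):

```python
# Back-of-envelope rates quoted above (Bryson / BBC).
bones_per_person = 206
fossilization_rate = 1e-9            # "one bone in a billion"

us_population = 320e6
expected_bones = us_population * bones_per_person * fossilization_rate
print(f"expected fossil bones from today's US population: {expected_bones:.0f}")  # ~66

species_today = 8.7e6
species_in_fossil_record = species_today / 120_000   # "one species in 120,000"
print(f"expected present-day species in the future fossil record: {species_in_fossil_record:.1f}")  # ~73
```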
Bones also aren’t durable on that time scale, and yet we find evidence of bone, even if we don’t find the bone itself.
I would guess that ritualistic burial increases those odds a ton, humans currently preserve their dead intentionally, increasing the time that they could become fossils by at least an order of magnitude, and perhaps 3 or 4 orders. Just that observation alone makes it unlikely that we are going to be represented in the fossil record at an average rate.
The other issue is that humans have way more than just their own bones to get preserved. Most animals have their own bones, footprints and maybe a burrow; I have a house, a car, some portion of all the roads, trash, pollution and all kinds of other crap to leave behind. It is still an enormous long shot that I will become part of the long-term record somehow, but if we went by the weight of all the things of mine that could be preserved, the odds of something being found versus that of a similarly sized animal have to be 1,000 times in my favor.
Sure, that all sounds reasonable. But if we say modern humans and our artifacts are 1,000 times more likely to be found than other species, that still means the odds are 1 in 120 that we get discovered.
These are multiple extremely rough, sketchy probability values being weighed against each other, so take the actual numbers with a grain of salt. The point is that the odds are so heavily weighted in the other direction that something making discovery more likely isn’t necessarily enough to make a difference.
Note, the point is also not, “There is no chance artifacts from a global technological civilization could ever be discovered.” Just that failing to find them isn’t proof of anything. It isn’t weird that we wouldn’t find them – all else being equal, you should expect not to find evidence of any particular thing that happened in the deep past.
Whilst absence of evidence is not evidence of absence, I’m pretty sure there’s a positive case against a dinosaur (or preceding era) advanced civilisation which doesn’t require us to try and use the very incomplete fossil record. It goes something like this:
1. There is no direct evidence of an earlier advanced civilisation on this planet.
2. Evidence such as the apparent disappearance of fossil fuels is not, without evidence of consumption, evidence of earlier civilisation. It is evidence that something happened but that’s not exactly definitive about what this was.
3. Our knowledge of what is required to make a civilisation is perhaps biased by our own species’ experience, but certain prerequisites seem to be required (examples follow):
a. An agricultural surplus to allow the critical mass of civilised beings to shift to non-agricultural work. This is based on grass and cereals in our own case, and I’m not sure earlier plants had the same productive capabilities, although my paleobotany is not that strong.
b. A source of power to allow the development of metallurgy to the point that widespread fossil fuel extraction was viable (better tools, and pumps are kind of essential for this). For us this was charcoal, which requires trees or palms, neither of which I believe were widespread in the period of the dinosaurs. Tree ferns being frond based seem less likely to provide the required carbon as fronds don’t carbonise as well.
c. A market or extractive economy spanning much of the globe to generate enough demand to fuel manufacturing and to supply resources when the major inputs are still space-extensive raw materials. The implication of this is that we can’t argue for an isolated civilisation as civilisation seems to require a large amount of the planet to be involved.
4. It seems unlikely that a civilisation could develop without the conditions in 3. above. However different models have been proposed and it might be worth addressing a couple raised here:
a. Water and wind power. Note that (with charcoal) this was sufficient to get to about the eighteenth century, but it’s hard to see how the subsequent key advances that make these viable power sources for an advanced civilisation – primarily batteries, power transmission and improved materials – would come about without the use of fossil fuels. These are key ingredients for the plastics, and to a point the modern ceramics, involved, for a start. And the metallurgy needed to produce even nineteenth-century electrical systems required reliable supplies of good fuel.
b. Nuclear and solar. It’s tempting to suggest that a civilisation that developed nuclear engineering without fossil-fuel backed technologies in material science would definitely go extinct pretty quickly… I can’t see how either technology, relying as they do on the conversion or containment of energy using materials developed with or from fossil fuels would be viable without fossil fuels.
5. Whilst 3-4 are not conclusive (and subject to correction by someone with better knowledge or who has read more relevant Wikipedia pages), it seems that if we are looking for civilisation we are looking for something that has utilised available resources in a similar way to our own. This means that the traces of civilisation would be recognisable to us, in particular metallurgy and perhaps traces of radiation. Both should leave anomalous geological features that are not presently identified.
6. So any putative early civilisation has to have the prerequisites for civilisation in some form, and would likely have to draw on fossil fuels to develop from that point (apparently without leaving geological indicators that they had done so). Neither situation seems very likely. Any proponent of an earlier advanced civilisation needs to at least hypothesise where it developed the prerequisite resources or how the civilisation might differ. This isn’t meant to be a barrier to arguing the case, but a suggestion that there is a case that needs to be made beyond saying we can’t disprove the idea.
I am going to zoom out to the broad points, which are as follows.
1. You can’t compare humans to the average likelihood of being preserved because we are way off the scale in terms of both potential artifacts and our behaviors which make them more likely to be preserved. It is plausible that we are talking well more than 1,000 times more likely, heck I wouldn’t even say plausible. I would say that humans are way over 1,000 times more likely to be preserved than any individual species of hummingbird, shrew, or insect.
2. There are going to be a great number of signs pointing at our existence in the geologic record, basically all saying “something weird happened here”. You have a mass extinction event in terms of the number of species, but (perhaps) without a decline in total fossils left. After humans are gone there is likely to be a massive explosion in new species; depending on how we go, you could well see the world overrun by the descendants of dogs, pigs, chickens, wheat, corn and apples, or you could see all large land animals disappear and then have a new bunch pop up suddenly. There is going to be a weird sedimentary layer where it will look to future geologists as if someone dug up and concentrated all kinds of elements. The amount of gold in that layer will probably be a curiosity in itself, let alone the iron, copper, lead, aluminum, etc.
The combination of these two strongly implies that humans will leave something behind that will point to our existence, and that there will also be broader signs encouraging the interested to look where and when they need to in order to find it.
I should note at the start that I do not actually believe in the existence of a civilization of advanced primordial serpent men, as awesome as that would be. I mostly just find the thought exercise interesting. That said…
Couldn’t you argue that the Paleocene-Eocene Thermal Maximum is at least circumstantial evidence for just that? The fossil fuels didn’t just disappear – something caused them to both be removed from the ground and burned, their carbon released into the atmosphere, causing a massive global warming effect.
In the original paper I linked to that started this, the primary thing they point to suggesting that this was some undetermined natural process and not intelligent action is that it happened over a period of a few hundred thousand years, and not a few centuries. This would certainly represent a very different/slower model of civilization than ours, but it does seem like it could plausibly represent consumption of fossil fuels. (Even though it probably wasn’t.) And of course this was 55 million years ago, so it would have been a hypothetical post-dinosaur civilization, but still.
I also don’t know enough botany to have the foggiest idea if it would have been possible to get agriculture going with pre-grass plants. But from above, it does seem like you could get pretty decent carbon-bearing fossil fuels, because something burned a crapload of them and caused a global warming mass extinction 55 million years ago.
Again, this doesn’t mean those fossil fuels were consumed, but it means they probably could have been.
(Also, I just looked it up, and trees originated 300 million years ago or so, and by the end of the dinosaur era they were pretty prevalent.)
This is the real sticking point, I think (and what inspired that paper that started this whole tangent.) The question of whether there would be any traces left in the geological record that we would recognize is apparently a pretty debatable one.
They make the point that most radioactive isotopes, for instance, don’t actually have a half life long enough to be detectable in the timeframes under discussion here. They do make the note that a full-scale global nuclear war would leave significant amounts of Plutonium-244 and Curium-247 that would last long enough and be spread out enough to potentially detect – but less global nuclear activity might be tough to spot.
Metallurgy seems even more problematic on the scale of millions of years. It’s not that there’s nothing that could be detectable – to quote, “anthropogenic fluxes of lead, chromium, antimony, rhenium, platinum group metals, rare earths and gold, are now much larger than their natural sources […], implying that there will be a spike in fluxes in these metals in river outflow and hence higher concentrations in coastal sediments.” The tricky part is that there tend to be plausible natural explanations for a lot of these types of events too. Proving it was caused by a civilization is the hard part.
To flip back to the Paleocene-Eocene Thermal Maximum again – imagine that hypothetical future civilization trying to deduce our own existence from geological records, without artifacts. They’ll see the results of the fossil fuels being burned, and they might find the metal residue we left in sedimentary layers. The question then becomes, though, how they rule out natural, non-intelligent explanations for us, the same way we default to natural explanations for similar events in our own past.
The really fun part of that paper is that they go through and identify the likely geological footprints we would leave if you knew what to look for – and then look at several events we know about in the geological record that seem to fit some of those criteria. They don’t come right out and say, “Dude, dinosaurs with rocket packs!” but that’s clearly the wink they’re having at the audience.
But that’s really the problem they’re trying to solve – how do you tell the difference between the evidence we leave behind and, say, the Jurassic Ocean Anoxic Event? Or alternately, do we call that event potential evidence of an ancient civilization too?
(They do suggest that it seems possible that synthetic plastics might last long enough to be detectable and identifiable as synthetic, although we don’t have a great way to predict the decay process for plastics on a geological scale, so that one’s kind of an X-factor.)
@MrApophenia
I’d say the big importance of this debate is if we find a lot of old planets teeming with animal life. If we find such a planet one of the first things we should do (besides testing the animal’s intelligence) is search for artifacts. So these thought exercises are likely to be very important and valuable.
In such a scenario the easiest place to look would probably be objects like the moon where things would remain intact for millions of years at the least. Then digging up/using radar/advanced fossil hunting.
Thanks, that’s helpful. So, maybe 15,000 bones for civilized humanity since the early Bronze Age, and then we have to guesstimate the ratio of technological artifacts to bones, normalized by effective fossilizability.
We’re still going to need the math on this part, though. Tricky.
This is contradicted by the earlier stuff: fossils are not found randomly; they are found in specific types of rock, or get concentrated by rivers running through certain types of rock. Civilization tends to concentrate potential fossils even further. Every time someone finds half of a mandible that could have been an early human ancestor, a dozen graduate students will spend a summer searching the surrounding areas for more remains. To discover evidence of dinosaur civilization you don’t need to randomly find a piece of it; you just have to find an interesting specimen of any kind near where evidence of civilization is, and it is reasonably likely to be discovered.
I think this is wrong, we have a few thousand good skeletons, each with dozens to hundreds of bones, and tens of thousands of individual specimens. The total for all dinosaur fossils is probably in the hundreds of thousands at least.
Speaking of gold, are there any processes that would deform all of it completely over the next billion years?
Investment bars and wedding rings seem like a fairly unequivocal sign that we were here, even if only a few are ever found.
I did more reading, and you are right about the volume of individual fossil specimens. There is actually a database of every dinosaur fossil recovery site and fossil ever written up in a published paper (to as high a completeness as the project has been able to catalog, of course), called the Paleobiology Database – as of the most recent article I found it comes out to about 8,000 fossil sites, 25,000+ organisms, and lots and lots of individual bones.
However, even all that reading kind of reinforced the broader point too. Here is an interesting article on statisticians who tried to use the variance in frequency of various dinosaur fossils discovered to estimate the total number of dinosaur species, including undiscovered ones.
The results came back with a couple thousand total species of dinosaurs – which is almost certainly wrong, and the explanation was that the fossil record just doesn’t provide enough information to draw conclusions like that.
http://blogs.plos.org/paleocomm/2016/03/30/how-many-dinosaurs-were-there/
Steve Brusatte of the University of Edinburgh is sceptical though: “I would take these numbers with an ocean full of salt”, he said. “There are over 10,000 species of birds – living dinosaurs – around today. So saying there were only a few thousand dinosaur species that lived during 150+ million years of the Mesozoic doesn’t pass the sniff test. That’s not the fault of the authors. They’ve employed advanced statistical methods that take the data as far as it can go. The problem is the data. The fossil record is horrifically biased. Only a tiny fraction of all living things will ever be preserved as fossils. So what we find is a very biased sample of all dinosaurs that ever lived, and no amount of statistical finagling can get around that simple unfortunate truth.”
Is this commonly experienced?
I’ve found that it seems more common for religions to consider humans less powerful, less durable, less morally consistent etc. than the sapient beings they traditionally believe to originate beyond Earth, even if they do consider humans uniquely great in the terrestrial order.
Which religions believe sapient beings come from beyond Earth? Scientology, perhaps. Everything else is geocentric in origin and so doesn’t have the correct mental concepts. Angels are not of this sphere, but are part of the world, not aliens.
@Watchman:
They don’t need to be, for the point that religions don’t necessarily “consider humans uniquely great”; I was trying to taboo ‘celestial’ in my comment, not ‘alien’. Believing someone is not from Earth doesn’t mean one has to believe they come from space or other planets – one might conjecture them to be omnipresent or otherwise nonlocal, or maybe that they come from outside the simulation.
In light of that,
Belief that a sapient being created the Earth or the universe at large occurs in many variants of Abrahamic belief systems; this belief, requiring a being to exist either before the Earth or outside of our timeline altogether, implies that that being is not itself from Earth.
Not if the first gains an effectively insurmountable advantage, as seems likely (singularity). Then we only need to explain why our local effectively omnipotent agent chooses not to reveal itself to us; but that’s just N=1, so any number of explanations will do.
What do you mean by “probably”?
Eyeballing Sandberg et al’s graphed Monte Carlo estimate, with p>0.9 the number of technological civilizations in the universe is either exactly one or greater than one hundred billion. So you’re going to need something like a 99.999999999% probability of self-destruction prior to universal conquest to make this work by any means other than “there can have been only one”.
Even assuming civilizations are absolutely limited to conquering no more than a single galaxy, and that their galactic conquests will not be remotely visible, you still need a 99.9% probability of self-annihilation, and that’s three unreasonable assumptions right there.
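A rough sketch of the arithmetic behind those thresholds (my own numbers, just to show the shape of the calculation, not anything from the paper): if N candidate civilizations each independently self-destruct with probability p before becoming visible, the chance that all of them did is p^N, so p has to sit extraordinarily close to 1 once N is large.

```python
# Hypothetical back-of-envelope check: per-civilization self-destruction
# probability needed so that P(no civilization ever becomes visible) hits
# some target, given n_civs independent chances. All numbers illustrative.
import math

def p_needed(n_civs, target=0.5):
    # Solve p ** n_civs = target for p.
    return math.exp(math.log(target) / n_civs)

print(p_needed(1e11))   # ~0.999999999993 -- the "eleven nines" case above
print(p_needed(1e3))    # ~0.9993 -- roughly the figure if only ~1,000 candidates are in play
```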
@Baby Beluga
Yup, that’s my understanding too. We very likely live under the sway of some effectively omnipotent entity, either one that took over its lightcone (which we lie within), or took over the whole universe (if lightspeed is not a fundamental limit), or many universes, or that is simulating us or otherwise created our universe.
Anthropic considerations make this even more likely.
Overall, growing up as a natural civ and finding oneself on the verge of singularity in an old universe is *extremely unlikely*. Only a tiny fraction of civs satisfy these conditions. (Admittedly I haven’t crunched the numbers, but this is my intuition).
At this point I am reminded of the Wearing the Cape series. At one point, a character (from the future) explains one theory of why there is no other detectable life in the universe (and they can detect much better than we can): the Hyperion Hypothesis, that the odds of life arising in this universe are in fact zero. Life on Earth came from another dimension.
Why counterintuitive? Seems logical to me.
It’s very much outside-view evidence, so it doesn’t really combine with most of your other sources of evidence about the hypothesis [we’ll one day either destroy ourselves or take over the universe].
Do the authors offer an analogy to something else that we know to be true?
I know you’ve heard this before, but abiogenesis occurred really quickly. It took vastly longer for multicellular life to occur than single-celled life; if anything, I’d guess that’s where a lot of the filter (if there is one) is.
I’ve only skimmed the paper and the abiogenesis supplement, but it looks like they’re modelling an extremely wide range of abiogenesis likelihoods on the grounds that we don’t have empirical data to go on and there’s a ton of theoretical reasons for a low likelihood to be plausible.
Empirically, we’ve examined one Earthlike planet in detail, and we’re pretty sure abiogenesis occurred exactly once there: zero is unlikely since there’s clearly life now (but panspermia hasn’t been conclusively ruled out), and more than one would predict multiple families of RNA transcription mechanisms, which we fail to observe despite diligent effort. And that one is the minimum we’d expect to observe evidence of, since we need either abiogenesis or panspermia in order for us to be around at all to observe it. Abiogenesis occurring in the first few tens of millions of years after the oceans formed does suggest a higher likelihood, but the lack of multiple independent abiogeneses on Earth suggests a lower likelihood, especially if we consider the possibility alluded to in the paper that the conditions for abiogenesis might only exist for a relatively short window (in which case early abiogenesis wouldn’t mean anything, because the choices are “early” or “nothing”).
It depends on how long it would take the results of an abiogenesis event to spread across the oceans and fill any potential niches for other abiogenesis-created primordial life. We’d still say life has a very high chance of arising even if things line up so it only happens every 50,000 years or so, if it only takes 25,000 years for the first one to spread across the world.
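To put that in numbers (using the same illustrative figures as above, not anything measured): if independent abiogenesis events happen at some rate, but the first lineage takes t_spread years to fill every niche, the expected number of additional independent origins is just rate × t_spread.

```python
# Hypothetical sketch with the illustrative numbers from the comment above.
rate = 1 / 50_000     # independent abiogenesis events per year (illustrative)
t_spread = 25_000     # years for the first lineage to fill the oceans (illustrative)

extra_origins = rate * t_spread
print(extra_origins)  # 0.5 expected additional origins before the niches close
```

So a single surviving lineage is roughly what you’d expect even when abiogenesis is “easy” on geological timescales.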
I read somewhere that before life (and the consequent coal and carbonate rock deposits) the density of carbon in the oceans was roughly the same as chicken soup. One mechanism for the first successful life to suppress further abiogenesis is just consuming all the free fixed carbon.
I’ve read that the reason we know abiogenesis, at least to the point of the selfish gene, took less than a million years is that the entire ocean circulates through hot crustal rocks in that time, so nothing that is not self-reproducing can accumulate for longer than that.
One can extend this to guessing that early life was a sequence of innovations, each one allowing the contemporary life-system to get access to greater stores of energy and/or carbon in the ocean, and thus adaptively radiating a new lifestyle and snuffing out the evolutionary advantage of duplicating that feat again.
There is one instance of this that we have some evidence for: water-based photosynthesis. The first organism that assembled the machinery to do that (and it is clearly a derivative of simpler photosynthetic systems) could obtain electrons from water rather than being limited to using low-energy carbon species. In that case, the new substrate wasn’t exhausted, but the new life form became the dominant auto/chemotroph (the base of the food chain) by sheer numbers.
Intriguingly, there is an alternative photosynthetic system — some bacteria use rhodopsin, I think, to extract energy from light, though they don’t use it to drive the water-to-fixed-carbon reactions.
Interestingly, the quality of life that is needed to successfully fill the ocean is terrible. Assuming that the ocean is sterilized by the crustal water circulation in 100,000 years, the first reproducer has to fill the ocean in maybe 25,000 years. But it takes only 120 doublings to turn a single bacterium into the mass of the entire ocean, so a primal life-form with a net doubling time of 100 years would survive.
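The doubling arithmetic above checks out with rough numbers (the cell and ocean masses below are my own order-of-magnitude assumptions):

```python
# Order-of-magnitude check of the "120 doublings" claim; both masses are assumptions.
import math

cell_mass = 1e-15     # kg, roughly one bacterium
ocean_mass = 1.4e21   # kg, roughly all of Earth's oceans

doublings = math.log2(ocean_mass / cell_mass)
print(round(doublings))        # ~120 doublings
print(round(doublings) * 100)  # ~12,000 years at a 100-year doubling time,
                               # comfortably inside a 25,000-year window
```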
Can someone explain the ‘crustal circulation’ thing? What is that? I’ve googled it and not found a definition.
I don’t see how that works; what is it that is supposed to “accumulate” in a non-living ocean?
If, say, p(abiogenesis) is 1E-9 per ocean-year, then Earth gets an ocean, and in the first million years, life does not exist. Anything that has accumulated in that ocean is by definition random sludge that isn’t life, isn’t selected for being life-friendly, and isn’t what we want, but that doesn’t matter because it doesn’t last and we start over in the second million years with a “new” ocean.
And life doesn’t evolve in that ocean, or the one after it, etc, but in year five hundred million A.O. and after five hundred sets of baby-free bathwater being thrown out and replaced, Gaia rolls a natural 18 or whatever and we get life. Which, being lively, fills all the niches and doesn’t go away. This is consistent with observation.
Oceans being cycled every million years can only affect the probability of abiogenesis to the extent that abiogenesis depends on many millions of years of accumulation of stuff that isn’t life, isn’t selected for life-friendliness, but is nonetheless necessary for life, and I don’t think that’s the case. It doesn’t take millions of years to salt an ocean with simple organic compounds, and anything more complex that hasn’t been evolutionarily selected for is almost certainly the wrong stuff that will have to be torn back down to simple organic compounds to be useful for life.
But, if it turns out that there is some long-term accumulation that is necessary for abiogenesis, the fact that it doesn’t happen (at least on Earthlike planets) is a case for abiogenesis being vanishingly unlikely, because now it can only happen if there is some freakish circumstance that causes an eon’s worth of accumulation to happen all at once. Which, again, is consistent with observation.
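For concreteness, the waiting-time arithmetic in the scenario above (using the same illustrative 1E-9 per ocean-year rate, which is not a measurement of anything):

```python
# Hypothetical Poisson-style sketch with the commenter's illustrative rate.
import math

rate = 1e-9             # abiogenesis events per ocean-year (illustrative)
years = 5e8             # five hundred million years of fresh oceans

expected = rate * years                  # 0.5 expected events
print(expected)
print(1 - math.exp(-expected))           # ~0.39 chance of at least one by then
```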
It seems plausible to me that there were multiple abiogenesis events, but only one has left surviving descendants.
If there are multiple possible mechanisms but ours provides a strong advantage, it’s entirely possible that abiogenesis occurred multiple times, but our protocellular ancestors crowded out all the other ones.
Multicellular life has evolved a bunch of separate times though. The wikipedia article counts 46 independent times in eukaryotes https://en.wikipedia.org/wiki/Multicellular_organism#Occurrence and a bunch more in prokaryotes (although they do not achieve the same complexity). At the very least that suggests that the filter is one of the adaptations of eukaryotes rather than multicellularity itself, given that multicellularity seems to develop frequently in eukaryotes. My guess would be mitochondria, but there’s also an argument that perhaps prokaryotes could develop complex multicellularity but are outcompeted by the existing eukaryotes in those niches. Overall though it’s not really that long after the oxygen crisis that we get eukaryotes and then multicellular life. That suggests that the lack of multicellular life was not because it’s hard for it to develop but because it requires an oxygen-rich environment, which took a long time to arrive after the first life. This has to take a long time because the oxygen released by photosynthesis was initially captured chemically by dissolved iron, and it was not until that iron was used up that oxygen could accumulate in the atmosphere.
Looking at the history of life as a whole, it’s amazing to me how fast we went from the cambrian explosion to today. Multicellular life before the Cambrian explosion lasted longer than all the time since. On a geological timescale, we hit intelligent life really fast after getting molluscs. That suggests to me that the changes associated with the Cambrian explosion may be one of the harder hurdles to reach in the fermi paradox equation.
According to wikipedia, the earliest known evidence of life is 3.77 to 4.28 billion years old, or 120 to 630 million years after ocean formation. In the chart in your link the “Life” arrow should probably point to 4280m, and that’s the earlier end of the estimate’s range.
In the book Life Ascending many elements are covered. My take-away from the book is that the hard part of abiogenesis (the part which can’t be done with bucket chemistry) is the formation of the cell membrane. Just about all of the other elements such as RNA replication, etc., can be done at large scale with buckets full of stuff in a lab.
Thus it’s possible that the gating issue is the right situation in which what we think of as a cell membrane can develop such that cellular life can develop and expand. At that point, natural selection can take over.
I always got the impression the initial cell membrane is just some sort of bubble which can come about by waves splashing on the shore, tidal vents, etc.
So the hard part is you need to get the right stuff trapped in the bubbles.
+1
What? Did they get self-replicating RNA molecules to arise from basic building blocks, without any material from living organisms to start it? I’d be extremely surprised at that. (And that’s the part I’d expect to be the hardest.)
I’ve never understood the issue to begin with, because it depends on the assumption that big, easily observed accomplishments are possible. Our current understanding of physics seems to be that they’re extremely difficult to the point that it’s likely nobody would bother. You can’t move or send messages faster than light, period, and that’s been our understanding for about a century now, right? Yes, you can have cryogenics or generation ships, etc., but the people you’re sending are, from Earth’s perspective, effectively ceasing to exist. You can send a message and get a response every eight years at the soonest. There’s no hope of practical trade and you’d go bankrupt trying to use it as a valve for surplus population.
The only reason you would have for doing it would be a commitment to preventing human extinction in the distant future, or insurance against asteroid strikes. But we can’t even be bothered to get off our asses about climate change, and we’re pretty sure that will have major repercussions in the lifetimes of today’s children. People just aren’t motivated to expend giant heaps of resources on fighting extremely distant or hard-to-conceptualize threats. Sure, we talk about it because we like the drama of it, but who honestly wants to leave everything they’ve known behind forever to live under extremely harsh and confined conditions with no hope of help or contact with anyone but a relatively small group of other similarly beleaguered people? Probably not many people. It’d be a life sentence in a (very unsafe) sciencey jail. Is it that improbable that intelligent life elsewhere would have similar priorities?
(I’m also not sure why it would be easier to colonize a planet in another star system–which is not likely to be particularly friendly to human life–than to create habitats in orbit around Sol, or greenhouses on Mars or whatever)
From the POV of the subset of Earth that is incapable of thinking in terms of centuries, yes. From the POV of, e.g., the Catholic or Mormon church, no. From the POV of a starship’s own passengers, no. From the point of view of transhumans or aliens with millenial lifespans, no.
The assumption that all decisions regarding interstellar travel or communications, for all species ever, will be made by the short-sighted majority of contemporary humanity or its exact equivalent in every alien species, is not one you can make with the confidence that your argument requires.
Have you read The Sparrow by Mary Doria Russell? Relevant to your comment, it’s about a Jesuit first-contact mission to a new planet and its aftermath, with lots of meditation on theodicy. Thoroughly enjoyable to the irreligious, as well (though I suspect growing up Episcopal at least gave me some theological background).
If you were a digitally uploaded transhuman with an effectively infinite lifespan, would you hang around for a very long time in some no-name system far from home? Or would you transmit yourself back once you’ve got your fill of exploration?
Why not both?
Well, with the long-lifespan aliens, you’re arguing that evolution will produce a species that calculates risk-reward/cost-benefit very, very differently from our own, regardless of how long it lives. What kind of environment would produce a creature that reasons in such a fashion, ignoring practical questions of expense and probable outcome and throwing itself on a grenade for the sake of an extremely large and abstracted group’s theoretical benefit in a distant future? Even assuming that it is feasible for a species to live that long (and not have horrible overpopulation problems). Not even getting into the transhumanism business, I’ll just say I have feasibility doubts there as well.
EDIT: This is to say, we don’t “think in terms of centuries” because that kind of thinking will get you killed on the savanna, or any other kind of evolutionarily competitive environment.
You don’t need to posit long-lifespan aliens at all. Entirely human institutions have observedly undertaken centuries-long projects with no immediate survival benefit; the Church’s cathedrals are the immediate example that comes to mind.
The idea that all of humanity will ignore the opportunity due to short time horizons is not credible. 95% of it, sure. But the remaining 5% will be the ones who actually do spread out and colonize the universe.
You can see the cathedral being built. It’s right there in the middle of the city, being raised up stone by stone. Just seeing it get built is a testament to the bishop’s power and prestige. People in another star system are invisible. Any conversation you attempt with them will be like trying to stream video transcribed from a dialup modem into morse code, sent by smoke signal, then reconverted into video by Navajo windtalkers. Those people, that ship, are gone. Even assuming they can recreate more than the palest miserable shadow of civilization on a hostile planet on their own.
I could see an interstellar voyage being launched by fanatical separatists, I guess. But anybody who wants to escape the rest of the human race that badly will not have good odds of holding an expedition together in tight quarters under stressful conditions.
I think you underestimate fanatical religious separatists. For one example springing readily to mind, consider the early years of Mormon Utah.
Utah is not one thousandth as remote or hostile as an alien world would be. Plenty of irreligious or modestly-religious people settled desolate parts of the American West.
I think you’re typical-minding here. You’re assuming the bishop ordered construction of the cathedral to project his power and prestige, rather than his stated (and probably actual) reason: to glorify God.
There are a small number of people who would commit to freezing themselves for an interstellar journey to explore and found a civilization, knowing no one on earth would ever know what they did. Some would even do it for the purpose of glorifying God.
While I obviously can’t go back and interview Medieval bishops, I’ve seen a lot of modern shiny-church-building drives. It is totally a prestige thing. Not to the extent of complete hypocrisy–part of it is that a big church might draw in the unbelievers–but there’s a reason Puritans et al reacted against church ornaments. It’s very, very easy for the cathedral to turn into the bishop’s giant stone (ahem) erection. Why should it surprise anyone if powerful men with miters have some of the same motives for building enormous edifices as men with crowns?
I’m not saying no one made cathedrals to show off, I’m saying some certainly did to glorify God. I had this daydream myself, as a Catholic and an electrical engineering student working in a robotics lab. I thought how neat it would be if eventually my robots could lead to the whole “post-scarcity society” thing where you could just build anything. I would design a beautiful cathedral my robots would build and call it “The Cathedral of the Miracle of the Multiplication of Loaves and Fishes” and feel deeply satisfied attending Mass in it. That wouldn’t have anything to do with showing my “power and prestige” because in the post scarcity society everyone else would have the same robots, too, so it wouldn’t really “cost” anything. It would just be for God.
It seems like there’s a general disagreement with your argument that people are making in different ways, and is one of the frequent objections to most proposed candidates for The Great Filter: it requires universality but is almost certainly not universal. That no subset of any civilization would bother with the large expenditures of space colonization given the downsides. Space exploration is not a Moloch problem like climate change. It doesn’t require everyone to coordinate. Lots of nations have launched space exploration missions with small fractions of their budgets. Elon Musk is trying to do it all by himself. If technology eventually improves such that interstellar travel is possible, albeit expensive, all it takes is one Future Musk here or one Alien Musk somewhere in the galaxy to get it going.
For another objection, if you insist that the only reason anyone does anything is peen waving, then okay, it’s not inconceivable that a future space race is “whoever can launch a credible colony ship first wins.” That’s the Science Victory condition in the Civilization games.
Yes. I’ve played Civ. But it’s a video game. And once you’ve landed a single ship, the accomplishment is done, which is sort of the point; it’s a thing you do just to show you can. The same way we landed on a desolate rock in the sky. Then did it some more. Then everybody got bored and we stopped doing it, and now we lack the capacity for doing it any longer, and it’s nobody’s priority except for places like China who haven’t done it yet. There’s talk about private spaceflight, and that’s exciting because that’s never been done.
If you want to tell me that we’ll launch a mission for the same reason, say we do. What then? There’ll be initial excitement, then decades of waiting, during which time a lot of other things will happen and civilization will advance to the point of being unrecognizable to the people who left it. By and by they’ll land on some wretched rock and set about establishing a base, sending back lots of updates about equipment maintenance and radiation concerns. Assuming they manage to set up a base that doesn’t flounder, they will be, basically, an unedited livestream of The Martian that goes on indefinitely, doesn’t get scripted to be exciting, and features old technology and values decades out of sync with what prevails back home. If it doesn’t work we’ll get scraps of information about whatever catastrophe killed them, possibly not even enough to tell what happened. How motivated are we going to be to start other missions?
I’m not arguing that it can’t happen once, I suppose. I’m arguing that the Drake Equation depends on it being obvious that everybody is going to reliably and persistently engage in a distinctly unprofitable form of behavior that no conceivable evolutionary pressure would select for.
Has there been a sci-fi story with this premise? Civilization sends out a colony ship, Earth goes on like normal, colony checks back in and then the Earth declares war on them because the colonists are now viewed as barbaric?
Why would we declare war? They’re decades away and can’t touch us. Possibly if they disgusted us enough we’d send some kind of WMD, but that seems extreme. More likely we’d just roll our eyes and stop watching after a while. Nobody wants to be the human race’s embarrassing time capsule.
(if you’re referring to an actual science fiction story, I’m not familiar with it, no)
I mean the American colonies were kind of a backwater to Britain for awhile. Eventually (after ~300 years, 1600-1900) the backwater surpassed the mainland though.
Also “prison colonies” like Australia, which are somewhat of a staple in science fiction. For instance, I believe that’s how Starcraft is set up.
“civilization will advance to the point of being unrecognizable to the people who left it”
Why would it be unrecognizable? Technological evolution can’t go on forever, nor can economic growth. The languages of the future will be koineized. Our descendants will likely be irreligious; there will be no great revivals. Politics, likewise, will reach an equilibrium of peace and stagnation. And, importantly, those people who lived a thousand years ago will probably still exist due to indefinite lifespan.
I don’t see any of that happening, but I also don’t have the energy to prolong this thread of responses any further than I already have, sorry.
“But anybody who wants to escape the rest of the human race that badly will not have good odds of holding an expedition together in tight quarters under stressful conditions”.
But would it be possible to modify (culturally and/or materially) humans for the purpose of taking such a trip? Make them less claustrophobic (less phobic in general), more mentally stable and resilient, more optimistic and trusting, etc.. Or replace them for the duration of the trip with the much-vaunted AIs.
People do survive and live in extreme conditions, such as the South Pole research stations or in the desert. This suggests that an interstellar trip need not be impossible.
Maybe we don’t have the technology now, but what about 1000 years from now?
I thought the Amish were going to take over. See also Muslim birthrates. The future belongs to those who reproduce, and that is not the irreligious.
The Amish might take over, and will do so to the extent that they would likely populate not just Earth but the Galaxy as well.
However, I sort of view this in the sense of bacteria vs. humans. The Amish will far outnumber the techno-atheists but the atheists will still very much be a thing. Just as bacteria/mice still outnumber humans but humans are still a thing. Also the Amish defection rate will likely stay at at least ~5% (it is 10% now but is being bred out).
That isn’t of course to say the Amish (or another group like them that doesn’t have problems with space travel) won’t continue to expand. Just that there are likely to be irreligious people in the future, if for no other reason than that some religious people are likely to have irreligious children.
Sort of like how Medieval London kept getting populated by the children of rural farmers and wasn’t populated by the children of London urbanites.
“I thought the Amish were going to take over. See also Muslim birthrates. The future belongs to those who reproduce, and that is not the irreligious.”
I assume they will be irreligious based on my assumption that the people who are advanced enough to leave for another star system, call them “posthumans,” will have genetically engineered themselves to be intelligent, more intelligent than any modern human who has ever lived. At this level of intelligence, they will be simply too bright to honestly believe, and I think this will be the case even if they are descended from modern “religious fanatics.” And I don’t think the Amish or Muslims will completely “take over.” Neither group is militarily powerful. If the Japanese population stabilizes at 80 million while the Nigerian population goes to a billion, it doesn’t mean the latter will rule Japan. The Japanese would have to let them. In addition to favoring religion, natural selection will operate within the irreligious population, as the groups which embrace ethnomasochism disappear.
That argument sounds good, but I don’t find it convincing. It’s very easy to imagine humans doing this. In fact, if they had had the technology, I find it harder to believe that various 20th century ideological groups wouldn’t have done this. Sure, it hasn’t happened yet, but that’s partly because we don’t have perfect filtration and recycling systems, or a great idea about where to go. Humans doing this is not just plausible, but likely in futures where we survive.
I submit to you that the aforementioned savanna created such a creature.
No, it did not. We spent decades threatening each other with nukes rather than negotiate peacefully, and we’re now ignoring climate change because it would require us to forfeit short-term economic growth for an uncertain future dependent on everybody else making the same prisoner’s-dilemma decision.
Human beings do not act in a disinterested, big-picture way. We prefer the short, sure payoff. We like to dream about grandiose projects like space colonization, but we also like to dream about time travel. Or lightsabers. Or steampunk. Impractical fantasies are great fun; it doesn’t mean we’re going to try them.
I’m not sure what you mean by 20th century ideological groups. I could picture Hitler or Stalin sending off a train of mostly-willing zealots to colonize wherever, but I don’t give such a mission great odds of success, let alone serving as the foundation of an interstellar empire.
You are assuming that “humans” are one big group that makes a decision.
You have to argue that everyone of means stays home. A bunch of people can make the irrational-to-you decision to leave home. This doesn’t mean they don’t exist. It means your model is wrong.
The fragmentation of the human race makes this still less probable, because an expedition’s likelihood of success would increase dramatically with size and wealth. To say nothing of other human factions viewing the colonization effort as a potential threat and sending a fusion missile up its tailpipe, or a sudden conflict or political realignment terminating the project. A unified humanity might feel comfortable enough to waste cash on silly schemes like this; if it takes money out of the defense or intelligence budgets, it’s going to be a hard sell.
This is a familiar objection to spaceflight, which is “since humans haven’t united to promote my policy preferences, they don’t get to unite to promote anyone else’s policy preferences.” It’s slightly better than the guy who wants to say this discussion is really about giving mortgages to minorities.
I can imagine an Earth that stops anyone from leaving. But you have to force an explanation that stops every culture in the galaxy from allowing anyone to leave. China shut down Zheng He’s fleet, and a court member could argue loudly that of course it was irrational and of course no one would do it, and it probably sounded convincing until the British sailed in.
And even during the Cold War, “decades threatening each other with nukes rather than negotiate peacefully,” the US and the USSR launched lots of space missions without triggering a nuclear exchange.
Zheng He’s fleet was perfectly rational, as Europeans proved shortly after with their own voyages. Transoceanic voyages, while risky, had enormous profit potential, which is why most countries with the means launched them. The laws of physics as we currently understand them do not allow interstellar voyages to turn anything like a reasonable profit, or produce anything of measurable benefit to the entity which launches them. They are tantamount to throwing money in the trash for all practical purposes. You are comparing apples and oranges.
NB that NASA can’t even get men into space on its own anymore. They can’t get the budget. Why? Because we’ve already done our bit of penis-waving in space, and now it’s up to China and other countries who feel they have something to prove. It’s pretty plain that space is dangerous, and very unhealthy when it isn’t dangerous, and it simply isn’t worth the price.
Zheng He’s fleet was perfectly rational
Yet they stopped. It’s like people don’t obey your laws of rationality. 🤔
The laws of physics as we currently understand them do not allow interstellar voyages to turn anything like a reasonable profit, or produce anything of measurable benefit to the entity which launches them
They have different values than you.
Yes, they stopped, and the Chinese suffered for it. The Japanese undertook a somewhat similar policy a bit later, and came out relatively well. There can be more than one rational choice, and neither group had enough information to foresee the consequences of its actions anyway.
We have known about the speed-of-light limit for many years now. There’s no sign that it’s going anywhere. Interstellar colonization is a plain irrational choice, unless you want to appeal to a nebulous concept of values that could be used to justify anything. Which it can–it made the Mayans habitually jab their genitals with thorns, and Carthaginians murder their children–but that doesn’t make one particular irrational choice a given. Especially when that choice has to be perpetuated, endlessly, over many centuries. We’re not talking about one voyage, but a pattern of colonization which we expect every intelligent species, or at least most, to continue, right?
is a plain irrational choice
Okay.
The trick with the transhuman side of it is also that being possible would make space travel much more feasible – but also likely make the aliens much less detectable. If I can fit a billion uploaded minds into a computer the size of a soda can, sure, they can probably go to space. But they could also potentially have a galaxy-spanning civilization with so little visible impact we’d never spot them.
Counter-point: The existence of Space Amish who don’t believe in mind uploading. It’s plausible that when mind uploading becomes a thing for humans quite a few will refuse to do it.
Sure, but they don’t really matter for the purposes of this question. Transhumanism is typically brought up as an answer to the idea that interstellar travel/colonization is prohibitively difficult and thus will probably never happen. If you get mind uploading, you dodge that problem – space colonization becomes much easier.
However, I’d argue that it becomes *so much* easier that it also means you’re no longer even thinking about colonization in any sense we would likely recognize/be able to detect.
Meanwhile, the space amish will all just stay home and continue living on their original planet due to the near-impossibility of space colonization.
I’d expect the energy needs of mind-uploaded humans to vastly exceed the energy needs of non-uploaded humans, assuming you’re also giving them enough stimulation to avoid insanity. The brain already hogs an enormous amount of the body’s energy; add on hardware emulation and anything approaching a realistic perceptual sim, and you’re looking at a lot of juice. You could simply have them turn off for the voyage, but that doesn’t sound much different from cryonics in practice. Are you referring to acceleration tolerance?
Acceleration tolerance, longevity in storage, basic mass… Instead of needing to keep a crew alive (even in a frozen state) you can pretty much record them on a fairly stable material, launch them, and just let them power up on arrival wherever they were sent. You don’t need to bring food or life support or supplies like that, either. All you need is enough machinery to make more computers with the materials you find at the other end.
And heck, if you’re dealing with mind uploading, it doesn’t even need to be a one way trip – once you do get some infrastructure set up on the other end, you can start transmitting mind-states back and forth at light speed. (This is basically the premise of ‘Altered Carbon.’)
Self-replicating probes and computers that can survive the voyage are certainly still non-trivial technological achievements, but they seem a lot more plausible than keeping a colony’s worth of biological humans alive.
(Also, re: power usage, remember that even if they’re awake, they don’t need to run at realtime speed. You can save a lot of processing power by running your perceptual sim much slower than the real universe, which has the added benefit of helping the boring parts of space travel pass by even during the parts where they’re not fully deactivated.)
If you’re going to do all that, you can also feed them the inputs that simply tell them they’re traveling to an exotic star system, keep them in a closet somewhere, and use all the money you saved to buy, I don’t know, a lifetime supply of cheesecake for all the non-digitized people in your society. Or whatever your non-digitized people happen to like. I’m a cheesecake person, myself.
“Meanwhile, the space amish will all just stay home and continue living on their original planet due to the near-impossibility of space colonization.”
But this is it — the real Amish didn’t stay home, they took a ride to the New World. Religious fundamentalists are, in general, much more flexible than the above stereotype would indicate.
Once the technology gets invented, probably some Amish will upload themselves, go to the nearest habitable planet, download themselves again, and colonize it, just as they’re doing in the Americas. Some of them will be fine with just using the technology, as long as they don’t have to understand and manufacture it themselves. Others will stop being Amish. Others will just stay home. This is how (cultural) evolution works.
To me, the fact this hasn’t happened yet shows:
1. Uploading oneself Accelerando-style is probably impossible.
2. We lived in an exceptional era of exponential growth, which falsifies our perspective. We should expect the rate of scientific and technological progress to slow down again at some point.
Probably linear or sublinear growth is more typical. Even though 1300s knights used better technology than the Sumerians, it was still recognizable. I think something similar is about to happen: 1000 years from now, people will still be using rifles, pistols, rockets, planes, cannons, and tanks (and maybe lasers).
If we are at a typical point in the evolution of a starfaring civilization, there may be roughly as many “filters” ahead of us as behind us. There may be no magic trick that solves every remaining problem, just the slow and steady accumulation of solutions and knowledge (until, hopefully, the next exponential growth burst).
Maybe the initial space colonization is so difficult and time-consuming that by the time any civilization has done it, they’ve “solved” issues of over-population and uncontrolled growth, so there’s no particular pressure to go anywhere else permanently on any near time frame. You can always watch the universe with massive space telescope arrays and occasional robot probes, and if one of the latter went through our solar system as recently as 600 years ago we’d be none the wiser.
Yes, I know it “only takes one”, but it’s not like any civilization can just spit out self-replicating starship missions. The type of infrastructure that takes probably imposes its own cultural constraints on how a civilization might behave.
Maybe there is no “particular pressure”, but if you’ve created a stable long-lived civilization, there is no reason not to colonize space over the span of millions of years.
Sure there is: opportunity cost. It would require money, time, and effort to launch such a voyage, and every generation gets to decide for itself whether it wants to spend those resources on chucking people into an oblivion hole or on building a relativistic accelerator waterslide.
Yet. 😉
“Maybe the initial space colonization is so difficult and time-consuming that by the time any civilization has done it, they’ve “solved” issues of over-population ”
On Earth, among developed societies, the population has stabilized. But here’s the thing, and Scott did an earlier post mentioning this: in 400 years the Amish are going to take over. That is to say, if most of the population has a growth rate of 0% annually and a subgroup has a growth rate of 3% annually, then that subgroup is going to take over. This suggests that at least there’s not really a peaceful way to deal with an overpopulation problem.
That will only happen if said growing population doesn’t have massive losses to defection to out-groups. Moreover, the Amish are not static as a group either technologically or culturally – it’s not a stretch to imagine them eventually adopting norms for lower fertility (especially if their population has grown enough that conflicts with outsiders have heightened because of it).
I think this is an extremely important point. Even if the galaxy has thousands of civilizations trying to be as visible as possible for the last billion years we probably wouldn’t notice them because space is fucking huge and there’s only so much physics allow you to do.
If they can spread at all, it takes less than a billion years[1] to colonize the galaxy.
[1] This number obviously has wiggle room, but if you can launch at 0.01 the speed of light, it’s 470 years to Alpha Centauri, then a few thousand years to rebuild enough wealth to launch to the next system. This leaves out more mundane options like spreading out through the Oort cloud which might reach the next system’s Oort cloud.
A billion years still gives you a 1/13 chance that expanding civilizations simply haven’t arrived at Earth yet (although if they were doing massive cosmic engineering, we’d probably notice).
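For a concrete version of footnote [1] (all the parameters here are illustrative, not from the paper): even with slow ships and long pauses to rebuild, a colonization wavefront crosses the galactic disk in tens of millions of years, far inside the billion-year budget.

```python
# Hypothetical wavefront arithmetic: 5-light-year hops at 0.01c, plus a few
# thousand years of rebuilding before each new launch. All numbers illustrative.
galaxy_width_ly = 100_000
hop_ly = 5
speed_c = 0.01
rebuild_years = 3_000

travel_years = hop_ly / speed_c              # 500 years in transit per hop
hops = galaxy_width_ly / hop_ly              # 20,000 hops to cross the disk
print(hops * (travel_years + rebuild_years)) # ~7e7 years, i.e. tens of millions
```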
One key assumption of the Fermi paradox is that an advanced civilisation would colonise the galaxy with self-replicating drones. These basically spread everywhere. That means we would notice, because at the very least there’d be an alien spaceship circling our sun, keeping an eye on things.
Okay, so we send out probes to far-flung star systems. What do we do with the information they send back? By the time they get there and send their reports all the way back, the civilization that sent them will have changed dramatically. We might not even be listening at all. Any information we receive from them will be years out of date. And … for what? What are they going to do, in this other star system? Launch an impractical colony ship? Or throw interstellar WMD at us just to be dicks, thereby provoking retaliation for no reason?
Judging by our species: reality TV.
Lack of interstellar colonization would be due to the entire species sitting on whatever they have for asses and watching the 500,000th season of Those Wacky Monkeys!
Would we notice it? Space travel in general strongly encourages the reduction in mass to save energy. Imagine an alien spacecraft only massing about ten metric tons, operating on low long-term power, and occasionally warming up every so often on a close approach to the Sun in an elliptical orbit to send a tight laser-beam transmission about what it sees back to another location.
It is not. It is a lot easier, and the people involved do not vanish from the realm of human concern. Which implies that the natural course for even an expansionist civilization is to not send a single starship until you are done turning every speck of non-hydrogen matter in your solar system into Dyson swarm habitats.
How much fun would you have maintaining a relationship with somebody where you could exchange messages once every presidential administration at the earliest? Eight years of latency to Alpha Centauri; every two and a half messages, another generation has grown up. If civilization advances enough to colonize another star system, it probably has the tech needed to recycle or synthesize any material far more cheaply than it could hope to acquire it by the inconvenience of interstellar trade. And as civilization advances, events and changes tend to occur more quickly; the homeworld would move on.
Do you believe that humans a thousand or million years from now will still live only three score and ten?
I doubt we’ll be around in a million years, and I’ve no idea as to a thousand. But I don’t think we’re going to escape Malthus in spaceships, and I’m as deeply skeptical of negligible-senescence claims as I am of interstellar colonization, digital immortality, and what-have-you. At any rate, for the sake of the post you replied to it doesn’t matter, because we’ll still grow up in twenty years, and thus generations will stay the same length regardless of how many of them there are alive at any given time.
Immortals are not necessarily going to be any more interested in conversations with that much delay in them. For some types of immortals (Uploads running faster than we do, for example) they would be even less interested.
theredsheep:
Fair enough. But that’s different objection: since a million years is sort of the lower consensus bound for populating the galaxy, you’re saying that L is too small for a starfaring society to develop.
I thought it was implicit in your objection that policy/zeitgeist was mostly determined by the latest mature generation. That’s not a bad assumption when our lifespan is just a few times the generational interval, but it strikes me as unlikely if it’s a hundred times that. So I would have stuck with your initial skepticism about SENS.
Thomas Jørgensen:
Perhaps. But an upload who happened to be interested in the interstellar might well deal with that by suspending for long intervals. And transhumans in general might design themselves to have a longer attention span, and/or a greater propensity for multitasking. We enjoy things like birthdays and summer vacations at least partly because there is a fairly small number of them in any given life. Somebody who lived to be 5000 might get analogous joy from something that happened every hundred years. It’s not like they’ll have to sit twiddling their thumbs in between.
Pausing to skip the waiting means you are choosing the conversation over the entire rest of your culture and social graph. I mean, if you are an upload society with casual forking, maybe some forks will do that, but in that case, you might as well transmit your entire mindstate on the com laser, which is no longer a conversation, it is emigration.
Sure. That’s another possibility. People are all different.
Isn’t this really the same problem as the human colonization of Earth? Humans live on basically all land surfaces of the Earth outside of Antarctica, from the northern Canadian islands, to the Sahara desert, to the Kalahari, to the Outback, to Patagonia, and have for tens of thousands of years. (I suppose the northern Canadian islands might have been somewhat later – Iceland and New Zealand were only in the past thousand or so years.) It’s not immediately obvious why it would have been better for any of those people to move ten miles further into the Arctic, or another ten miles further into the Sahara, or another thousand feet up the slopes of the Tibetan plateau, rather than just living in some denser arrangement with the other people in the more favorable environment. But this is just a selection thing – as long as *someone* decides to go into the new terrain, and their descendants don’t *all* die or go back, humans end up living in those new environments.
To get from Alaska to Patagonia over 2,000 years, you need the frontier to move an average of about 5 miles per year. That works if once a generation, some group decides to set out and move a hundred or so miles down the line, and in the next generation some group does the same. However, in every generation, most people stay behind and populate the area they grew up in. The people at the very frontier are descended from dozens of generations of individuals that each left their home for some distinct reason, but most people in most places never do.
As long as once every few thousand years, some group on a planet decides to send out ships to colonize another planet, for whatever reason, you still get several hundred incidents (with Fibonacci/exponential growth in the total number of planets covered) in a million years.
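A toy version of that growth logic (the launch interval and the cap on habitable systems are my own illustrative numbers): if each settled world sends out even one successful colony every few thousand years, coverage grows geometrically and saturates any plausible number of targets long before a million years is up.

```python
# Hypothetical branching-colonization sketch; parameters are illustrative only.
settled = 1
launch_interval = 5_000        # years between successful colony launches per settled world
horizon = 1_000_000            # years
habitable_systems = 10**9      # generous cap on available targets

for _ in range(horizon // launch_interval):
    settled = min(settled * 2, habitable_systems)

print(settled)                 # hits the cap after ~30 doublings (~150,000 years)
```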
Just gonna reiterate that such colonization, which I agree seems totally likely and plausible, would lead to visible Dyson swarms in far-off galaxies if not our own. Thus easily observable.
Although that assumes dyson swarms are the path of future technology.
I think you’re DRAMATICALLY overstating the ease of observing ANYTHING on an intergalactic scale. Even directly observing planets orbiting stars in our OWN galaxy is on the very cutting edge of what is realistically possible given current technology…and frankly we’re about to start hitting some hard physical limits on that front.
Eh, the whole Dyson Dilemma goes like this:
“Alien civs necessarily build Dyson spheres as its the most efficient way to harness energy. Dyson Spheres block out Starlight”
It’d be really easy to observe. You’d just notice a gradual disappearance of galaxies: for instance, in light from 5 billion years ago you might see 15 galaxies in a patch of sky, while in light from only 100 million years ago you might see 5.
If a Dyson Civ existed in our galaxy we shouldn’t be able to see any stars at all. We do. It’s an easy observation.
Now, all this makes me believe is that Dyson Spheres are not the endgame of technological progress (other people who believe in Dyson Spheres usually argue the Great Filter happened in the past, not the future, but based on this thread I’m more confident it’s in the future). Instead there’s some other method, such as perpetual motion or jumping into other universes, that offers a better source of energy than Dyson Swarms.
Great youtube channel that digs into the whole “Dyson Dilemma” issue:
https://www.youtube.com/watch?v=94iDdHRa2X4&t=5s
Not necessarily that straightforward. The most luminous stars are also the shortest-lived — your average red dwarf lives about a trillion years, Sun-like stars about ten billion, Betelgeuse-like stars about ten million. If an alien civilization preferentially selected long-lived stars for Dyson shelling — perhaps because they didn’t want the shell blown apart by a supernova in the astronomically near future — we might notice almost no difference from intergalactic observations.
“But we can’t even be bothered to get off our asses about climate change, and we’re pretty sure that will have major repercussions in the lifetimes of today’s children.”
Who is “we?” I’d give it about a 10% chance of being a hoax and a 30% chance of being a net positive, given people’s revealed preference is for warmer climes (compare population movements into the oven that is Arizona versus the freezer that is Alaska), which may be enough to balance out the costs of adaptation. Given the massive cost that would be necessary to reverse it, I’d say what we’re doing now is about right. In any case, our descendants will almost certainly be genetically engineered to be much smarter than us and thus incomparable.
“People just aren’t motivated to expend giant heaps of resources on fighting extremely distant or hard-to-conceptualize threats. ”
Ever heard of religion?
“Sure, we talk about it because we like the drama of it, but who honestly wants to leave everything they’ve known behind forever to live in under extremely harsh and confined conditions with no hope of help or contact with anyone but a relatively small group of other similarly beleaguered people? Probably not many people. It’d be a life sentence in a (very unsafe) sciencey jail.”
The first pilots experienced freezing temperatures, deafening noise, and a high probability of death. Their descendants fly in quiet, climate-controlled, and quite safe passenger airplanes. Our descendants will have perfected interplanetary travel by the time they launch an interstellar mission, their ships will be safe, comfortable, and large.
…and then (the ones that survived, at least) went home to their families and friends and comfortable houses.
…and still headed towards a desolate rock at least 4 light-years away from everyone they know and care about on Earth. No matter how advanced and utopian the exoplanet colonies are, it’ll still be 4 years to get a message back to earth, and another 4 years to hear a reply.
If interstellar travel was as safe as airline travel currently is per mile the average passenger would die 50 times before they got to Alpha Centauri.
Well, I’ve argued here and elsewhere that “there’s no strong reason to assume that the evolution of intelligent life is anything more than a once-in-a-universe rarity”. And I’ve pointed to the huge error bars on e.g. p(abiogenesis) for that. But this is I think a new and useful mathematical formalism to shore up that idea, in this specific context and others. Thank you for finding it.
It will, I think, require guarding against a new version of overconfidence. Among the enlightened minority that recognizes that probabilities come in flavors other than “basically zero” and “basically one”, there is nonetheless frequently the second-order overconfidence that whatever the assessed intermediate probability is, its error bar is basically zero. In reality, even the two-sigma error bar will often span the range from basically zero to basically one.
Why don’t more people in my life talk like this?
+1 to you sir.
I don’t know about the rest of you, but I probably give too much credence to debunking.
Only true probabilities should have error bars, i.e. if an outcome is genuinely random and we estimate the probability of that outcome, our estimate can be wrong, so it needs error bars. Where the uncertainty is epistemic only (i.e. with perfect knowledge we could assign probability zero or one), then it makes no sense to give the probability an error bar, because there’s no true probability that we’re trying to estimate.
In this case many of the various factors are true probabilities in the model — e.g. the probability that given a random planet with unicellular life, that planet will evolve multicellular life in a certain timeframe.
Correct, but the comment might have been read as if we should never state probabilities without error bars, which would be wrong. E.g., I estimate there’s an 89.6% chance that you’re male, based on the survey. Even though the survey might be unrepresentative, it doesn’t make sense for me to say there’s an 80-95% chance that you’re male (say).
Agreed, and particularly where you only intend to make a single test there’s no real meaning to an error bar around a mean probability. P(nybbler=male) = 0.896, and that’s all there is to it. If I caused any confusion by my imprecision on that point, I apologize.
I disagree, for reasons I can’t articulate well.
For one, error bars are still objectively useful in a chaotic system. ‘The temperature tomorrow will be 22 +/- 3 (95%)’ is a useful thing to know.
In a perfectly deterministic world, you’re still looking at a distribution over imperfect inputs, resulting in a distribution of outputs, which you can use to make error bars.
If there were a Big Book of Quantum Events sitting on the Almighty’s coffee table, detailing which nuclei decay when, does that change anything?
I’m not sure I agree with this. If I am somewhat uncertain of how likely an event is, I might be willing to offer a bet where I pay $0.20 for the chance of getting $1.00, but be willing to risk $1.00 only if I am paid $0.80. In this case, it seems reasonable to model my beliefs as “probability of 0.5 with error bar 0.3”
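For what it’s worth, one standard way to make “probability 0.5 with error bar 0.3” precise is to put a distribution over the unknown chance itself, for example a Beta distribution matched to those two numbers (the figures below are just the ones from the comment above, purely illustrative):

```python
# Hedged sketch: a Beta distribution over the unknown chance p with the stated
# mean and spread. Illustrative only, not anyone's actual model.
from scipy import stats

mean, sd = 0.5, 0.3
s = mean * (1 - mean) / sd**2 - 1    # a + b, from the Beta moment equations
a, b = mean * s, (1 - mean) * s      # here a = b ~ 0.89

belief = stats.beta(a, b)
print(belief.mean(), belief.std())   # recovers 0.5 and 0.3
print(belief.cdf(0.2))               # credence that the true chance is below 0.2
```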
My cynical view on “why no one has thought of this before” is that thinking about the Fermi paradox doesn’t give you tenure, so no one has brought the full weapons of overthinking to this problem.
I had a few technical questions (I have not thought about these deeply) and would love to hear if anyone has insights:
– Why log-normal as a prior?
– Why do they assume these factors are independent?
– Even if you only remotely believe their analysis of f_l and f_i, there’s >50 orders of magnitude of uncertainty on a quantity that it seems we can get some experimental evidence on. Even a tiny bit of evidence should result in huge reductions to this uncertainty. Where’s the literature on this? Why haven’t I heard about it?
You get normal distributions if you think that a lot of small additive factors contribute to the random value.
You get a lognormal distribution if, rather than additive, you think the factors are multiplicative. The Drake Equation is explicitly multiplicative. Even if the distributions for each parameter were strongly non-lognormal, the distribution for the drake equation would be somewhat lognormal.
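Here’s a quick Monte Carlo sketch of why that matters (the factors and ranges below are invented for illustration, not the paper’s actual priors): multiplying a few log-broad uncertain factors gives a distribution whose mean can be huge even while a large chunk of the probability mass sits at “effectively alone”.

```python
# Illustrative only: made-up log-uniform priors over three stand-in Drake factors.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

f1 = 10 ** rng.uniform(-12, 0, n)   # e.g. chance of abiogenesis per star
f2 = 10 ** rng.uniform(-6, 0, n)    # e.g. chance of intelligence given life
f3 = 10 ** rng.uniform(-3, 0, n)    # e.g. chance of being detectable now
n_stars = 1e11                      # rough star count for the galaxy

N = n_stars * f1 * f2 * f3          # civilizations per draw

print("mean N:", N.mean())          # driven by the lucky tail; can be in the millions
print("P(N < 1):", (N < 1).mean())  # yet "alone in the galaxy" gets a large share
```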
Lognormals: As Izaak said, lognormals appear naturally when you multiply a bunch of (finite variance) random factors. In fact, for most priors we used loglinear distributions, which are approximately scale-free. But the exact distribution does not matter much for the argument as long as it is log-broad.
Independence: The Drake equation is in a sense built to slice nature at its joints, hopefully making the factors independent. We talk in Supplement II about possible dependence, including the neat result of Verendel and Häggström that enough dependence between priors can make the great filter argument misbehave, but it doesn’t look like it messes up our argument much.
Uncertainty: We think there really is a ridiculous amount of (often unacknowledged) uncertainty that could be moved a lot by new results and observations (and give our arguments). But this journal club might be really good for checking this assumption.
It’s because it’s still just making extremely broad assumptions about the potential parameters, even if we are then trying to calculate the probabilities of particular scenarios with those parameters.
We just need more data. I don’t think we can really say anything meaningful about the Fermi Paradox until we can direct image Earth-equivalent planets (same type of star, same size planet, similar orbits) within a thousand light-years or so. If we don’t find any Earth-equivalents (or planets that show strong indications that an Earth-like biosphere is covering the surface), then complex life is probably very rare. If we find such planets but no indications of intelligence, then the biologists are right and intelligent life is probably extremely rare.
I don’t think that it’s abiogenesis that is a problem. Out of the 5.5 billion years that our planet is habitable, it took 4 billion years to evolve vertebrates. Abiogenesis happened almost immediately; that doesn’t make it seem like a very unlikely event. But if the sun had been a bit bigger, Earth’s history of life would have ended with fried frogs.
Yes, the real question doesn’t seem to be abiogenesis but the long slog from “life” to “culture”.
Hmm, maybe you’ve put your finger on the important thing — accumulating the innovations to get to cultural evolution is a slow process and the evolution of stars is significant on that time scale. Even the “fast” processes since the Cambrian Explosion took 10% of the habitable lifetime of the Earth.
I’ve also seen assertions that getting Earth-like planets requires that the interstellar medium has enough “metals” in it — which are produced by supernovas. And that the Sun is about the oldest star that has enough metals to form the Earth.
Be careful: probabilities are conditioned on the fact that it happened. From Hanson’s essay on the Great Filter:
Granted, though: from this viewpoint, abiogenesis indeed looks like it’s probably an easy step.
The authors need some Earthbound examples of their logic playing out so we can check up on their math. Do they offer any?
Not in the paper, but isn’t this basically the Black Swan argument for how models assigned the 2008 crash a 10^-10 (or whatever) probability and then it happened?
The 2008 financial crisis is often called a black swan event, but the guy who popularized the idea (Taleb) didn’t consider it to be one.
I’m going to have to go back and reread Taleb because my interpretation of him seems to be at odds with 9x% of other people’s interpretations.
My understanding is that Taleb uses “black swan” to mean “a random unexpected event that you didn’t know how to actually calculate for,” but “Black Swan” to mean “an unexpected event that was created by your not expecting that event.” The difference is that if you believe that all swans are white, that doesn’t increase the odds of black swans existing somewhere outside of your knowledge; the question is simply whether you can accurately guess how likely black swans are to exist. If you believe that housing prices in the US are unlikely to go down and develop a financing instrument based on your assumption, then that financing instrument changes the probability that housing prices will go down.
In Taleb’s view the 2008 financial crisis wasn’t a ‘black swan’, but a ‘Black Swan’ where the housing price crash was made inevitable by the assumption that housing prices wouldn’t crash.
I think a lot of people are confused by Taleb’s ideas, myself included. I was surprised when he said the 2008 financial crisis wasn’t a black swan event.
I didn’t know that there was a distinction made between “Black Swan” and “black swan event.” It’s been about 7 years since I read the book, so my memory on it is a bit hazy. But this paper doesn’t seem to make a distinction and it states:
If accurate, that is terrible communication, and the second swan should be called.. uhm. “Ironic Swan”
This is my attempt to synthesize Taleb’s work and distill several books into a short reply.
From my Kindle edition of The Black Swan, in the prologue (typos likely mine)
Before the discovery of Australia, people in the Old World were convinced that all swans were white, an unassailable belief as it seemed completely confirmed by empirical evidence. The sighting of the first black swan might have been an interesting surprise for a few ornithologists … but that is not where the significance of the story lies. It illustrates a severe limitation to our learning from observations or experience and the fragility of our knowledge. One single observation can invalidate a general statement derived from millennia of confirmatory sightings of millions of white swans. . .
I push one step beyond this philosophical-logical question into an empirical reality, and one that has obsessed me since childhood. What we call here a Black Swan (and capitalize it) is an event with the following three attributes.
First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact (unlike the bird). Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.
A few pages after that there is the best individual line that supports my description above:
Isn’t it strange to see an event happening precisely because it was not supposed to happen?
There was a finance community assumption that because housing prices hadn’t previously declined all across the country at the same time in quite a few decades, they would never decline all across the country at the same time.
Ironically, in 2007-2008, housing prices didn’t initially decline all across the country at the same time. Instead, what happened was that housing prices had inflated so high during the Housing Bubble in California, Nevada, Arizona, and Florida that those four states came to represent a huge chunk of total housing market values, so much so that those four states by themselves represented a huge bet that new mortgagees would be able to pay off their big mortgages.
When housing prices started declining in those states late in 2006, that started tumbling various financial dominos, beginning with the shakiest Orange County subprime mortgage firms in the late winter of 2007, moving on to more respectable sounding mortgage firms in August 2007 and prestigious Wall Street and European firms in 2008. The financial crash of September 2008 set off a general recession which, finally, led to housing prices falling nationwide. But the damage had been done before the last parts of the country saw falling housing prices.
Yet in retrospect there was nothing one-in-a-million about this chain of events. It seems pretty pre-ordained due to the national infatuation with believing that Diversity Is Our Strength.
The Bush Administration had campaigned in 2002-2004 as part of its Increasing Minority Homeownership initiative for lower downpayment and documentation requirements for mortgages on the grounds that traditional credit standards were bigoted against minorities. At the White House Conference on Increasing Minority Homeownership on 10/15/2002, President Bush told his federal regulators that he wanted 5.5 million additional minority homeowners by 2010 and he didn’t want traditional credit standards, such as substantial down payments and transparent documentation of incomes, to stop his social justice initiative.
But it turned out that traditional credit standards were traditional for good reasons.
Working with this type of estimates in probability distributions is essentially what I do for a living, and I’m comfortable that I have a very good handle on the arguments made in the paper. I’m happy to try to explain any of the math that seems off. The major thrust of what they did is known under the name of a posterior predictive distribution, and wikipedia does a pretty good job of explaining what’s going on in general.
No, but the mathematical argument is quite simple, and the error in reasoning is fairly common. Here’s a different example:
There’s a time-dependent variable you’re tracking, like the price of a particular stock. You know that the variable is autoregressive–that is, the price on day i is correlated with the price on day i-1–and so you fit an autoregressive model to the historical data you’ve recorded.
Now you attempt to predict the future. You have two options: you can use the best model you’ve found, or you can use a mixture of all possible models, weighted with how likely they were to fit the historical data. The first is much simpler, and often can be done in closed form (that is, with pencil and a small amount of paper), and so is often what people do. The second, however, is necessary if you want to get the tails of your distribution right. Using just the ‘best-fit’ model threw away all the uncertainty you had in the parameters (it may have fit the data better than a model with very different parameters, but perhaps not all that much better), and that uncertainty is driving the behavior of the tails.
[For example, see this illustration of credible models with Bayesian linear regression, which is attempting to depict that uncertainty in the model parameters.]
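A rough toy version of that contrast, with invented numbers: a short AR(1) series, a least-squares fit, and a crude normal approximation standing in for the posterior over the coefficient. None of this is from the paper; it only illustrates how parameter uncertainty fattens the forecast tails.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy AR(1) series: x_t = phi * x_{t-1} + noise (phi and sigma invented).
phi_true, sigma, T = 0.9, 1.0, 60
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.normal(0, sigma)

# Least-squares fit of phi, with a crude standard error standing in
# for a real posterior over the parameter.
X, y = x[:-1], x[1:]
phi_hat = (X @ y) / (X @ X)
se_phi = sigma / np.sqrt(X @ X)

h, n_sims = 20, 20_000  # forecast horizon and number of simulated futures

def simulate(phi_draws):
    """Roll the AR(1) forward h steps, one phi value per simulated path."""
    paths = np.full(n_sims, x[-1])
    for _ in range(h):
        paths = phi_draws * paths + rng.normal(0, sigma, n_sims)
    return paths

point_paths = simulate(np.full(n_sims, phi_hat))             # best-fit model only
mixed_paths = simulate(rng.normal(phi_hat, se_phi, n_sims))  # mixture over plausible phis

for name, paths in [("point estimate", point_paths), ("parameter mixture", mixed_paths)]:
    lo, hi = np.percentile(paths, [2.5, 97.5])
    print(f"{name:17s} 95% interval: ({lo:6.1f}, {hi:6.1f})")
# The mixture interval comes out wider: the parameter uncertainty that the
# point estimate throws away is what drives the tails.
```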
I’m also uncertain how useful their method of prediction is. Or rather, whether its result is all that much more intuitive/useful than the typical point estimate. Or a simple error bar on that estimate. (The whole p-value vs effect size vs confidence interval debate.)
I do think that this is a good example of making sure the metric is asking the question we want answered.
Some possible real-world examples of this that I think would be useful for understanding it:
Change Drake to model how likely a common retail chain is to appear within a block/city (i.e. point estimates and distribution estimates of Starbucks/McDonald’s within a city)
Another Drake-like equation I remember seeing is estimating the number of desired suitors given certain preferences and criteria.
Off-topic, but I also have some reservations about whether the distributions are independent of each other. As in, the distribution for the longevity of an interplanetary civilization might look much different given that habitable planets are super common than it would if habitable planets are rare. No idea how to usefully model this or how much it would affect the point estimate compared to this method.
Glad to have gone from my prior of “eh, the math of that paradox is probably wrong in a way I don’t understand” to this posterior “eh, the math of that paradox is probably wrong in a very specific way I don’t understand”!
Check out Sniffnoy’s reddit comment – I found it much clearer than Scott’s explanation. Basically, the Drake equation is designed to calculate the average number of civilizations we should expect to see, but that number doesn’t really matter; what matters is the probability of our civilization being the only one in the observable universe. The paper calculates that probability as about a third, so if we look out into the void and detect no alien life we shouldn’t really be surprised – after all, there was a 1-in-3 chance of us being alone.
You could respond that you don’t understand how they calculated the probability of us being alone in the universe, and that’s entirely fair (Lord knows I don’t have the faintest grasp on those calculations) but you don’t really need to be able to do the math yourself in order to understand why that math dissolves the paradox. All you need to understand is that if you run the calculations, it turns out that the probability of us being alone is 1-in-3. It’s not paradoxical that a 1-in-3 chance panned out, so no further explanation is necessary.
I SUCK at anything involving maths on even the most basic level, so I’m having a hard time visualizing what this means in practice. I mean, I know what an average is, but I don’t understand how the estimates for the average number of civilizations expected wouldn’t affect the probability of there only being one civilization. How are they at all totally independent things? It sounds like the paper is just combining a load of different versions of the Drake equation with different parameters into one. I’d need some examples in order to understand it.
The paper says this:
Does “interval [0, 0.2]” mean that each parameter can have probabilities in that range?
average number of civilizations expected wouldn’t affect the probability of there only being one civilization. How are they at all totally independent things?
Here’s a similar problem: if the lotto jackpot gets really high, the expected payout may be more than $1, but the odds are still very high that you are going to get back $0. Even as the lotto jackpot doubles every week because no one wins and the expected payout continues to rise, you are still very very likely to be getting back $0 from your ticket.
A bunch of low-probability but high-return results can blow the expected value out of the water without changing the chances that you “win”.
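With invented numbers for the odds and the jackpot, the arithmetic looks like this:

```python
# Invented numbers: a $600M jackpot at 1-in-300M odds.
p_win = 1 / 300_000_000
jackpot = 600_000_000

expected_value = p_win * jackpot   # $2.00 per ticket: "worth it" on average
p_nothing = 1 - p_win              # ~0.999999997: you almost surely get $0

print(expected_value, p_nothing)
```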
That’s a good example.
Alternatively, Scott’s example is actually decent, it’s just a bit brief in the original post.
In this example you can really see the difference between the average number of expected civilizations (5 billion) and the probability that we’re alone (1-in-2). It’s not that the two numbers have nothing to do with each other, it’s just that the relationship isn’t as straightforward as might be naively anticipated.
Suppose the probability of alien life around a star is the product of two terms:
P = p1*p2
You don’t know what p1 and p2 are. But you think that they are either 0 or 0.2, each with a probability of 1/2 (and independent).
Then with probability 3/4, at least one of p1 and p2 is zero, and therefore P=0 and there are no aliens; while with probability 1/4, P = 0.04 (4% of stars have alien civs).
On the other hand, you could have said, “On average, I think p1 = 0.1 and p2 = 0.1. Therefore P should be about 0.01, and so about 1% of stars should have life. Then, given that there are >trillion stars in our local group, it’s really surprising we haven’t seen anyone”.
But that 1% “on average” is actually the combination of a 75% chance of no life, and a 25% chance that 4% of stars have life. So it’s actually not surprising we don’t see anyone. That’s the more likely case!
(This is basically their example, except they have a uniform interval over [0,0.2]. But the binary case is easy to compute directly, and this is the intuition for their result. If you’re bothered by the fact of our existence in my example, replace 0 with epsilon).
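If you want to check those numbers mechanically, here is a tiny Monte Carlo; it assumes nothing beyond the 0-or-0.2 coin-flip priors of the example above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Each factor is independently 0 or 0.2 with probability 1/2,
# exactly as in the example above.
p1 = rng.choice([0.0, 0.2], size=n)
p2 = rng.choice([0.0, 0.2], size=n)
P = p1 * p2

print("mean of P:                ", P.mean())        # ~0.01, the naive "1% of stars"
print("P(no life at all):        ", (P == 0).mean()) # ~0.75
print("P(4% of stars inhabited): ", (P > 0).mean())  # ~0.25
```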
TL;DR: We are alone in the universe if one of the parameters in the Drake equation is (approximately) zero. Since there are many such parameters and we really don’t have very precise estimates for their values, there is a very non-negligible likelihood that at least one of them is sufficiently close to zero.
My money is on the step going from basic organic compounds (amino acids) to self-replicating structures. There’s a lot of hypothesizing and theorizing, but AFAICT nobody has even suggested any coherent step-by-step explanation. It still looks like shaking together a bucket of cogwheels and expecting a functioning wristwatch to fall out of it.
Wait, is this a fair description of the findings?
In that case one of the parameters being (approximately) zero is equivalent to the great filter existing.
The great filter was already the simplest (and most prominent?) logical conclusion drawn from the Fermi paradox, so though the statistical point is nice it doesn’t add much to the discussion.
Edit:
I think I see where I went wrong. The two are tangential.
Great filter reasoning either already took this into consideration implicitly or just went at it from the other direction – we see no evidence of life, therefore at least one of the values of the Drake equation is lower than we think.
While the paradox was supposedly that we didn’t think any of the variables were so low (when actually we just ignored the wide error bars in our estimates).
Your TL;DR is basically the old way of viewing the Drake equation as a product of point probabilities. And even there it would be possible for many factors to play together to depress the overall probability. The innovation is making this interplay of many factors mathematically rigorous, which shows that the probability of each step might be not particularly close to zero, but the overall probability is.
At least that is my take on it. Maybe somebody with more statistical chops can comment.
Depends on what the “old way” is. If it is estimating values and multiplying to find the average or expected number of civilizations, then I don’t agree. Asking “what is the expected number of civilizations” is different from “what is the probability of zero civilizations”.
You could get sufficiently close to zero by setting two parameters at one in a million, instead of one at one in a trillion. But somehow it feels more likely to me that we miss some particular stumbling block for one of the parameters, causing us to underestimate it by a few more orders of magnitude.
As I understand it, the work here used MC simulations to explore the probabilities of Drake’s equation. How did they specify the probability distribution of each parameter? I.e., what is the likelihood of abiogenesis happening at p=10^-6 vs it happening at p=10^-12?
The Monte Carlo sims picked random values on a log-uniform distribution.
I’m not quite sure why log-uniform is the right choice. If I think something has a value between one and a hundred, I don’t usually take that to mean it’s as likely to be below ten as to be above it. If they had picked a linear-uniform distribution, the final probability would have been substantially higher.
My thought experiment of one to a hundred may be flawed — when the various estimates vary over ten or twenty orders of magnitude, I guess what you’re really saying is that they are guessing about the exponent, not the value.
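To see how much that choice matters, here is a toy calculation with invented ranges (three Drake-style factors, each somewhere between 10^-12 and 1; these are illustrative numbers, not the paper’s actual priors):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
n_stars = 1e11   # rough order-of-magnitude star count for the galaxy

# Toy stand-in for the Drake product: three uncertain factors, each believed
# to lie somewhere between 1e-12 and 1 (ranges invented, not the paper's priors).
lo, hi = 1e-12, 1.0

# Log-uniform: the exponent is uniform, so 1e-11 is as likely as 1e-1.
log_uniform = 10 ** rng.uniform(np.log10(lo), np.log10(hi), size=(n, 3))
# Linear-uniform: the value is uniform, so nearly all the mass sits near the top.
lin_uniform = rng.uniform(lo, hi, size=(n, 3))

for name, draws in [("log-uniform", log_uniform), ("linear-uniform", lin_uniform)]:
    N = n_stars * draws.prod(axis=1)   # implied number of civilizations per draw
    print(f"{name:15s} P(N < 1) = {(N < 1).mean():.3f}, mean N = {N.mean():.3g}")
# Log-uniform priors leave a large chance of an effectively empty galaxy;
# linear-uniform priors over the same ranges make emptiness look nearly impossible.
```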
Disregarding the point about the actual topic of the article…
Ketil, I disagree with you about which step(s) are likely to be very low probability.
I remember once reading a short article about a rule of thumb for calculating confidence limits for a duration given only a single partial duration. The example given was: human civilisation has existed for x years. It therefore has a 95% chance of continuing to exist for at least x/y and at most y*x years. Unfortunately I don’t remember the numbers. ‘y’ could have been 5, or 18 or 95 or something.
Does anyone know more about that?
Anyway, the point is that even with only one example of something, we can still get a very rough approximation for the likelihood of an event happening.
So, life is believed to have originated on this planet pretty much as soon as conditions became suitable for it. Might have taken a million years of planet-wide randomness; who cares – it seems very likely to occur pretty soon on Earth-like planets.
(Obviously we don’t know how many Earth-like planets there are, but planet-detection methods have only recently become sensitive enough, and they seem to be finding lots of planets. So there’s probably plenty.)
However, the development of eukaryotic life (apparently a necessary precondition for complex multicellular organisms) took 2 billion years – approximately half the timeline. That suggests that it might be a low-probability event.
To get to the Cambrian explosion (when animal diversity really took off) took another 1.5 billion years, so perhaps there’s another rare event in there.
I suppose this argument implies that the development of functional intelligence isn’t enormously unlikely; it happened within a few hundred million years.
A reasonable counterargument is that these apparently slow events may not actually be unlikely; rather, a gradual process needed to occur first to change the conditions. That may or may not be true, but it doesn’t change the fact that things which happened practically as soon as they could were probably quite likely to happen sooner or later.
I like the idea that in a few hundred million years our descendants are talking to aliens all around the galaxy and scratching their heads over why we were so worried about being alone.
This necessarily assumes something about the prior probabilities – and even to work as an approximation, it assumes something like that the prior probabilities are not too extreme.
It also assumes that the observer can be considered randomly chosen from all past and future humans (and simulated humans we may turn ourselves into after the singularity). Let’s say there’s a significant (prior) chance that an intelligent civilization goes on to exist until the heat death of the universe (or whatever). For most of that time it will be extremely scientifically, technologically and philosophically advanced. Someone in such a world will think about these questions very differently if at all. The point in the history at which you exist certainly affects your thinking. Is it reasonable, then, to consider yourself a randomly chosen observer?
Another way to say it: Take a universe where most civilizations go on to exist for billions of years. A person living at a time when his civilization is a few thousands of years old will still say the same thing that you (Loris) just said (but he’s wrong). Therefore the fact that you said it gives no evidence whatsoever that our universe is not one where our civilization is likely to exist for billions of years.
(I’m not sure if my argument here is correct.)
Please don’t read too much into the example given in the article I vaguely remembered. It kinda relates to the current discussion, but apparently only in a distracting way.
I’m not talking about how long civilisations exist on average in the universe (although, yes, that is potentially relevant to the Fermi paradox).
What I’m actually saying is that if something has been observed exactly once, and happened basically as soon as it could (the origin of life), then it’s likely to be a ‘probable’ event. On the other hand, if it seems like something could have happened at any time, but took over a billion years to do so (i.e. eukaryotic life; complex multicellular organisms), it’s a candidate for an improbable event.
And the other explanation for these very long stages – that precursory change is happening, but it takes a long time – is also interesting. For example, if a persistent high concentration of atmospheric oxygen is necessary to create the niche for eukaryotes to flourish (and actually, given other data I think that’s likely), then the window where eukaryotes could have evolved but had not yet is very much smaller.
If that were true throughout, it would suggest that actually there are potentially lots of aliens, but they’re all at about the same stage as us, because even though it’s common, that’s basically how long it takes. We’re not the first, we’re typical – but most of these civilisations are in a bubble before they make contact.
You’re probably thinking of the Doomsday argument.
Actually, no. Well yes, sort of. That’s got an additional layer of geometric increase in population to affect the result.
I think it’s like the German tank problem.
Looking into this some more, I believe the model is the same as the frequentist analysis, for the case with one tank. Eyeballing the equations on the wikipedia page without worrying too much about the derivation, I think this fits with the 95% confidence interval being the range x/39 .. 39x, where x is the previous duration.
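For what it’s worth, the rule of thumb is easy to check by simulation under the single assumption that the moment of observation falls uniformly at random within the total lifetime (which is exactly the assumption the objections above are poking at):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

# One assumption does the work: the moment we look is uniformly random
# within the phenomenon's total lifetime T. The spread of true lifetimes
# below is arbitrary; the coverage doesn't depend on it.
T = rng.lognormal(mean=0.0, sigma=2.0, size=n)  # true total lifetimes
x = rng.uniform(0.0, T)                         # age observed so far
future = T - x

# Rule of thumb being checked: with 95% confidence the remaining duration
# lies between x/39 and 39*x (so the 'y' in the half-remembered article is 39).
inside = (future >= x / 39) & (future <= 39 * x)
print("coverage:", inside.mean())   # ~0.95
```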
I wrote a paper in high school in c. 1975 using the Drake Equation to prove that aliens would show up Real Soon Now.
Perhaps the Drake Equation needs more factors than I included in my calculations 43 years ago?
For example, the Bush Administration put together a sort of Drake Equation for their demand for 5.5 million additional minority homeowners, but the Bushies refused to consider a factor that perhaps Hispanic immigrants weren’t as good credit risks as white Americans. And indeed, immigrants defaulted at about three times the rate of white natives. But that fact wasn’t allowed in the equation because it would be Immoral to consider such realities.
Dude. Stop trying to force the conversation.
Banned for two months for derailing unrelated conversation to culture wars
Man, undo this
He’s one of your most consistently interesting commenters
——
Also, his viewpoint represents an epistemologically important addition to your comment section
While I don’t want to attack someone who was just banned, I must disagree:
No. No he isn’t. He was just banned precisely because he isn’t an interesting commenter, and attempted to make yet another conversation into the same conversation he always wants to have.
If anyone is interested in fun great filter literature, read Revelation Space.
Not that I believe that particular solution. I think three factors Drake equation optimists always underestimate are multi-cellular life, solar instability, and the incremental utility of intelligence. The last one I think people generally have always overestimated. Just within the apes, it’s not evident that chimps, gorillas, or orangutans becoming more intelligent (and basically increasing calorie requirements by 25%) would increase their ability to survive (aside from increased sympathy from humans).
Outside of the Ape family we see many instances of intelligence, but it would likely be hard for those to incrementally evolve higher intelligence. 3 Groups that immediately come to mind are Corvids, Whales, and Octopus. Aside from the massive calorie investment, 2 of those 3 have significant problems. Corvids cannot increase brain size substantially without losing flight; whales cannot increase their dexterity significantly without increasing their drag in the water. Smart octopuses would seem the way to go, but they also present challenges. They are still water creatures which complicates everything from metallurgy to agriculture. They currently are not very social so cooperation will be difficult to develop. ETC ETC ETC.
TLDR? One of the conceits of humans is that, being intelligent, we assume intelligence confers significant evolutionary advantages. However, this hasn’t even been true for the entirety of human history.
I always thought the main problem with octopuses (which perhaps you include under “not very social”) is their reproductive strategy, which involves laying a very large number of eggs then dying basically as they hatch. This means that they cannot teach anything to their young.
That and the ~1 year lifespan
The more intelligent mammals tend towards longer lifespans so I’d guess that an evolutionary corollary of increased intelligence would likely be greater longevity?
Eh, too many confounders (body size, diet, taxonomy, life history). Body size is a big correlate for longevity, and intelligence could be related to either or both or neither.
The really informative points for an intelligence-longevity link would be tested intelligence between closely related species with radically different longevity, such as mice/rats (~2 years) and naked mole rats (~32 years).
Are you willing to say that the probability that creatures of intelligence similar to apes develop human-level intelligence is less than 10^(-n) with n=5 or 10 or whatever is required to significantly change the calculation? Speculation like this works for producing small probabilities in the everyday sense of “small”, like a few percent. Not, IMO, for producing the astronomically low probabilities relevant here.
The evolution of intelligence seems to be more-or-less continuous to me. Vertebrates are generally more intelligent than invertebrates, mammals and birds more intelligent than other vertebrates, primates more intelligent than most other mammals etc. The fact that high intelligence (compared to related groups) seems to have evolved independently several times (primates, cetaceans, corvids, parrots, octopi) also suggests that it’s likely to happen.
I’m willing to say, “I don’t know”, but I do feel like people seem to generally treat it as a fait accompli that Dolphins and Corvids would make large gains by developing Ape-like intelligence.
I do think that if you have diverse populations of Ape-like creatures, that have both dexterity and intelligence, over long enough time spans eventually one will probably stumble into a situation where there are enough calories available to support even greater brain development combined with evolutionary pressures to do so. But I do disagree with the linear development of intelligence theory. One reason for including octopi is to kind of illustrate that.
Moreover, I don’t necessarily think that there seeming to be more intelligent birds and mammals than cold-blooded animals is related to when those two groups evolved, but rather to the advantages of being warm-blooded. Brains are fickle things, and the smarter the brain the more fickle. Temp fluctuations probably aren’t the best if part of your evolutionary advantage is leveraging brainpower.
Plus, we don’t really know what the intelligence level of Permian, Triassic, Jurassic, and Cretaceous animals was. Sharks then could have been smarter and more pack oriented, same with pelycosaurs, therapsids, archosaurs, dinos, plesiosaurs, etc. Thus, it may just be that following each extinction event it becomes easier for warm-blooded creatures to evolve into niches where cold-blooded creatures previously dominated, because it’s more adaptable so long as calories can be found. A penguin just needs to smash calories down its gullet to avoid freezing to death. A frog that needs to take over for the now-extinct cold-water amphibian has to evolve much more.
“But I do disagree with the linear development of intelligence theory.”
I think your ape example illustrates it. Take an ape or even a lower primate. Its probability of stumbling onto higher intelligence is MUCH higher than a bacterium’s or an early animal’s. So higher intelligence seems to be a series of small, decent-probability steps, and not one impossible step.
I agree with you on Octopus. Well sort of, I find it at least plausible that “Water Worlds” may at some point prove to be a blocker for higher intelligence. For instance, I’m not sure you could get electronics working even if you had a society like 19th century London underwater. Would it be possible to launch a rocket under such conditions?
However, water worlds, impossible gravity wells, high oxygen atmospheres, etc. are a separate category of Fermi Paradox solutions. If we’re just arguing about the higher intelligence step (after the Cambrian Explosion/eukaryote step), it doesn’t seem that unlikely.
Actually gravity wells are a neat topic on their own. If Earth’s gravity were lower it’d be much easier to launch spaceships and we’d probably be where 1950s Science was predicting. That Earth’s gravity is so large seems like it’s at least slowing down Earth’s expansion. SuperEarths will have a bigger problem with this. It’s likely not insurmountable but it’s a blocker.
As far as the Permian: even if Permian animals were the only big jump in intelligence (and it seems likely that there were at least several more, such as endothermy) that ever happened, it’s still a big jump/stepping stone that dramatically increases the chance of later intelligence. Unless you’re going to argue there’s sentient bacteria at some point in the fossil record.
I also take some issue with the fossil smarties idea. While we can’t test live ichthyosaurs, what we can do is use living animals, come up with a smartness predictor like the encephalization index, and apply it to various sorts of living animals to see how good it is. Then if/when we have an accurate model we can make pretty good guesses about the intelligence of fossil taxa.
That’s not to say we wouldn’t probably miss species like the New Caledonian Crow. However, I think it would be near impossible that we’d miss widespread groups like Cetaceans, Primates, Carnivora, etc. I mean, we do have early mammal fossils after all, and have a pretty good sense they were likely smarter than lizards from the same period.
Can you assume intelligent life will form on planets with significantly lower gravity than Earth has?
It’s definitely hard to test empirically. Titan has gravity a tenth of Earth’s and the consensus seems to be it looks like it’d form Earth-like life if only it was warmer.
(actually Titan to me looks like it’s the best candidate for life in the Solar System, period. I don’t get all the relative obsession with Mars/Europa and ignoring Titan. Titan is Europa (under-ice ocean) + a surface)
Enceladus sounds like a potentially interesting place to look too.
I was actually going to mention Enceladus(along with Mars/Europa) as the other place that gets all the attention while Titan is left wanting.
Enceladus may be a nice place to test, if you can somehow test the geyser. For Titan though all you need is a lander.
Based on the short list of smart animals, you could imagine this simple model of intelligence. There are two ways to get smart. One is to be very dexterous (e.g. octopuses), so that there is an advantage in having a brain able to model your physical environment very accurately. The other is to be social, so that there is an advantage in having a brain able to model your peers. Humans are the only ones that are dexterous and social, and so are the smartest.
Generational communication. How quickly we forget the importance of crystallized intelligence, which consists of speech, memory, and a written medium.
So while socialness of a sort is probably necessary, it doesn’t have to go to the extreme of modeling one’s cohort. Mere mimicry is probably good enough. (And it’s not as if prey species, which at one stage of development or another includes all species, don’t model predator species. And vice versa. Whether social or not.) (And come to think of it, don’t all species which mate or compete for resources model what members of their own species would do? It’s how you convince a fellow to back off your territory.)
Elephants are pretty dextrous and social. So are bears, corvids, parrots, etc.
I think there is a serious problem in science (and philosophy) of simple short arguments not getting recognized because they aren’t enough for a full paper.
I suspect any number of people had this general idea but didn’t have the time or expertise to build out a paper length analysis and as such the people who did have that expertise weren’t exposed to the idea.
This is more the usual problem of the media and people like Scott Alexander hyping a result they are not very informed about. It’s not just that “any number of people had this general idea,” but rather that the entire field had long ago fully synthesized this idea in their criticism of the Drake equation in the 1960’s. It’s for this very reason the Fermi paradox is not considered an actual “paradox” outside of popular culture. The point of the exercise is just that the uncertainties on the terms in the equation are too large and that we can currently only speculate which of the terms is very close to zero.
Which “field”, specifically, are you referring to, and how would you define its borders or membership? In my experience, the study of possible extraterrestrial life and civilization is too fragmented to constitute a “field”, which makes it difficult to justify the claim, “everybody in the field already knows this”.
What I mean is that I’m a physicist who has had discussions with a wide variety of experts in “Fermi-paradox”-adjacent fields, and I have never in my life met someone who thought that the “Fermi Paradox” is an actual paradox. The term is always used to refer to the constellation of questions surrounding just how uncertain we are about the terms in the Drake equation, the uncertainty surrounding the knowledge that at least one of the parameters is really small without knowing which one in particular it is, existential questions raised by anthropocentric parameters being small, and the possibility that there is extraterrestrial life that is just difficult to detect. The sort of exercise done in this paper really is redundant with numerous discussions I have had, and there are books and papers since the 1960s that have shown (since it is trivial) that if you put a pessimistic number into any of the terms in the Drake equation, the probability can be made close to zero, so that there “obviously” isn’t any real paradox.
OK, so our working definition of “the field” is, “people orin has talked to about this”. Maybe consider that you are working in, or have created, a bubble?
I’ve talked to plenty of people in related fields who consider the Fermi paradox to be a reasonable subject for discussion. Scott has cited some quite respectable proponents of that view, along with publications in peer-reviewed journals in relevant fields. A quick search finds fifty-two abstracts in ArXiv which cite the Fermi Paradox in the abstract alone, of which one is Sandberg’s and one seems to take your “there is no Fermi Paradox” position.
There is, perhaps, something that might be described as the field of ETI studies. And then there’s the people you hang out with and imagine have the last word on the subject. These aren’t the same thing.
I always figured that they just overestimated the chance for intelligent life, and the chance for an industrial revolution type big event.
There are millions and millions of species on this earth, and while a lot of them developed similar survival strategies (be strong, be fast, have A LOT of kids), none of them developed “be smart enough to build a complicated civilization” as a strategy. I guess you can make a case for neanderthals being as smart as humans, but I don’t know enough about that to really argue it. I remember reading somewhere that they had weaker abilities than us when it came to language, which definitely falls under the definition of weaker intelligence for me. If anyone knows more about that, please step in.
Anyway, whether you count the neanderthals or you don’t, you still only have one or two examples of intelligent life developing, and I don’t think that’s enough to extrapolate about the odds. If you throw a d20 twice and get a 7 once, it doesn’t mean you have a 50% chance of getting a 7 when you throw it next.
The other thing is the industrial revolution: once you have a smart enough species, who’s to say that they would build a society that has enough spare resources to allow space travel? To me it seems that only something like the industrial revolution could afford such spare resources, and that happened only once. We feel like it was bound to happen, but how can we know? Maybe that kind of event is rarer than we think it is.
Posted sort of a response to the “Rare Intelligence” part below but I’ll take a whack at “Rare Industrial Revolution” as well.
So let’s look at human progress over the last 5 million years.
5 million – 10,000 years ago: https://en.wikipedia.org/wiki/Stone_tool
Basically there’s a progressive development of technology. There wasn’t just “basic stone tools” and then “the agricultural revolution” and then “the industrial revolution”. It was more like punctuated equilibrium, yes there were some big advances but there was also a long slow march towards better tools.
Going from this ~3 million years ago: https://en.wikipedia.org/wiki/Oldowan#/media/File:Canto_tallado_2-Guelmim-Es_Semara.jpg
To this ~10,000 years ago: https://en.wikipedia.org/wiki/Stone_tool#/media/File:Arp%C3%B3n_con_microlitos.png
So you can see even before the agricultural revolution people were slowly but surely getting more materially prosperous.
Now, one great argument on my side would be “We have at least 3 distinct agricultural revolution events, in China, in Iraq, and in Mexico” However, I find it likely that this prior technology was sort of a “necessary step” and so the proto-Mexicans in Beringia were already significantly closer to agriculture 10,000 years ago than say the Australian Aborigines. Still, that 3 distinct cultures (with this technological base) invented agriculture does suggest maybe, with human biology at least, the agricultural revolution was sort of inevitable.
So now for the Industrial Revolution itself. The Industrial Revolution sounds like a grand concept, but if you actually look at European history at that time it wasn’t “People defending themselves from the vikings” to “Cars and the Internet”; it was more:
1.) Scholastic Monks like https://en.wikipedia.org/wiki/William_of_Ockham
2.) Renaissance Scholars
3.) Intense era of Global Trade. For instance the Portuguese conquered Dubai around 1500 (it was a brief conquest). There were incredible advances in the age of sail, etc.
4.) Geniuses like Leibniz and Newton in the 1600s.
5.) Enlightenment scholars.
6.) A complex system of international finance
7.) Then many things we now think of as the industrial revolution.
So it wasn’t a one-off, it was sort of a gradual development. Now could this development have been stopped or slowed down? I don’t think it could’ve been stopped, but I think it could have been slowed down. For instance, I can’t help but think, based on the earlier post about the Dark Ages actually starting roughly with the rise of the Roman Empire and stopping (or progress accelerating again) right after its collapse, that if the Romans had been less successful and Libertarian Paradise Carthage had survived, then we’d be much more advanced. Still, a hundred-year slowdown is nothing in terms of the Fermi Paradox.
Now what if everybody in Europe died? Just based on what I know from reading quite a bit of alternative history here’s some decent guesses for how long “Civilization X” would take to get to roughly the equivalent of the Industrial Revolution:
Middle East/China/India:
There’s an excellent book on this topic, https://en.wikipedia.org/wiki/The_Years_of_Rice_and_Salt. It estimates it’d take ~200 more years which sounds like a plausible estimate to me.
Sub-Saharan Africans/Bantus: ~2000 years. They had pretty advanced metallurgy/iron tools and the like.
MesoAmerica (everyone in the old world dies): ~5000 years. Think of the Aztecs as being on the same level as Sumeria roughly.
Non-MesoAmerican Natives or less advanced Africans like the Khoisan: This is hard to tell of course. My guess is ~10,000 Years.
Australian Aborigines: Not to be racist but I think this is the strongest argument for “Industrial Revolution isn’t inevitable” I think it may have taken them another 40,000 years or more. Does anyone know if there are any documented advances in Aborigine culture over time?
So those are rough estimates. For humans, once they got to the human intelligence stage, it seems like the industrial revolution was inevitable. Slow, but inevitable. Again 100,000 years or more means nothing when discussing the Fermi paradox as we have plenty of star systems with over a billion years head start on us.
Now, maybe you think it’s human biology (thumbs!) that’s the key step. To that I say maybe, but one, we’re specifically discussing the Industrial Revolution step here, and two, of the reasonably intelligent groups on earth (crows, primates, elephants, cetaceans, cephalopods), 4/5 of them have something like a proboscis that can be used for tool manipulation.
I took a whack at the Rare Intelligence Hypothesis below so I’ll take a whack at Rare Industrial Revolution here.
Let’s assume for the sake of argument, since we’re only looking at Rare Industrial Revolution here, that we already have a creature with the intelligence and dexterity and atmosphere/coal deposits necessary to kick off the industrial revolution. Water worlds or dangerous oxygen atmospheres would be separate filters.
So for humans we notice an immediate trend, long before the agricultural revolution, in technological progress.
Humans went from this 3 million years ago: https://en.wikipedia.org/wiki/Stone_tool#/media/File:Chopping_tool.gif
To this 10,000 years ago:
https://en.wikipedia.org/wiki/Stone_tool#/media/File:Arp%C3%B3n_con_microlitos.png
Then the agricultural revolution occurred in at least 3 separate events: Mesoamerica, Iraq, and China. Now, I actually think that the agricultural revolution required some degree of technological progress before it could take place. So while the Mesoamericans pulled it off, they had all the technology the Beringians did. If instead of Beringians we populated Mesoamerica with, say, modern humans from ~50,000 years ago, I think it would’ve taken them longer than 10,000 years to invent agriculture.
Now let’s look at the Industrial Revolution. It definitely wasn’t a one off event. It didn’t go from “Knights to Cars”. Instead it was more clearly like this:
50 AD: Roman/Greek Scholars
500-1200 AD: Monks such as https://en.wikipedia.org/wiki/William_of_Ockham
1200-1500 AD: Renaissance Scholars. Capitalist city states such as The Hansa and the Italian Merchant Republics.
1500: Era of Global Trade. Portugal conquers Dubai (in order to do so they needed highly advanced sailing technologies)
1600s: Newton and Leibniz. Newton’s line “If I have seen further it is by standing on the shoulders of giants” seems highly applicable here.
1700s: Enlightenment. Queers in drag hold many fancy intellectual discussions and start up a new country based on those discussions.
1800s: With all the gradual buildup in technology, small incremental advances (in theory) with huge impact (in practice) are able to occur.
Now, you may say “That’s just Europe”. To which I don’t want to get bogged down in the details of how the Mughal Empire maybe could’ve done it. Instead I’ll make some estimates, based on reading a lot of alternative history, for how long the development of the Industrial Revolution would’ve taken across societies (like say if a plague killed all the Europeans):
Middle East/India/China: There’s an excellent book on this called https://en.wikipedia.org/wiki/The_Years_of_Rice_and_Salt
The book estimates it would take ~200 additional years. I find this plausible.
Sub-Saharan Bantus: I estimate ~2000 years. They had iron tech and agriculture down.
MesoAmericans: ~5000 years. They were at roughly the tech level of the Sumerians.
Non-Meso-Americans/ African groups like the Khoisan: ~10,000 years. Cahokia looks pretty similar to stuff in Turkey from ~10,000 years ago.
Australian Aborigines: Here’s where you may have got me. I find it plausible they may have been “stuck at a tech level”. Does anyone know if there’s evidence that Aboriginal society was notably more advanced when Europeans showed up than it was ~40,000 years ago when they showed up on the scene?
Now, I will say it’s plausible technology can be delayed. Like after reading the last dark age post (where it’s argued the dark ages were more 0-500 AD than 500-1000 AD) I find it plausible the Roman Empire basically destroyed the advancements within the classical world. Get rid of the Roman Empire and maybe Capitalist Carthage would’ve circumnavigated the globe in the 500s (they made it to Senegal anyways). We’ll never know. But I’ll point out a 500 year delay means absolutely nothing when we’re discussing the Fermi Paradox and planets that have over a billion years head start on us.
Now just to disclose where I am, I actually think the great filter is “future technology that we can’t foresee explains it; it turns out this tech makes universal expansion a really dumb idea.” Maybe this tech is dimensional rifting / going to parallel universes. I don’t know, but it sounds much more plausible to me than any other step in the great filter.
MesoAmericans: ~5000 years. They were at roughly the tech level of the Sumerians.
I’ve heard some disagreement on this – some of their tech was textiles, which didn’t hang around as well as pottery or iron.
I hear you, it’s pretty clear though that they were no more than ~10,000 years behind.
In some sense (obsession with Jade) they also more closely resemble early Chinese civilization than the Sumerians.
It’s also possible that future technology we can’t foresee is being actively deployed to mess with our attempts at detecting intelligent life. Sufficiently advanced aliens could do a pretty good impression of a Cartesian evil demon, were they so inclined.
I find that somewhat plausible. The problem is it’s not testable. I also sort of buy the idea of “We’re running on a simulation and it’s saving memory. The stars are just sort of supposed to look like stars but nothing’s really going on there until we actually visit it.”
Unfortunately this seems untestable. Evidence against it might be like actually being able to explore other star systems. If we’re in an enclosed sphere or something the Voyager probes should hit it at some point. However, if it’s just saving memory you can’t really test that because the simulation will “look natural” when we get there.
I feel like this would be a good jumping off point for a novel…
Yeah, for extremely hard to test hypotheses like that we’re pretty much stuck relying on priors, or at best Holmesian elimination of the alternatives.
For what little it’s worth I think something like simulation is extremely likely, and something like a Culture GSV parked in-system messing with us moderately unlikely.
First off, greetings to everyone as first-time commenter.
Ad rem: it strikes me that the Fermi Paradox is just as easily explained by the realization that it depends on the Justice Potter Stewart Postulate: “I know [evidence of intelligent extraterrestrial life] when I see it”. That seems to me a terribly optimistic assumption.
By default, we ascribe everything we observe to natural processes, which is quite sensible (because Occam). However, it is at least plausible that some of the weirder properties we find (obvious theoretical stop-gaps like Dark Matter and Energy are good candidates) are in fact evidence of advanced alien life that we don’t recognize as such. Granted, we have no good reason to think this is the case (hence Occam), but we also cannot completely dismiss the possibility.
Taken to the extreme, we get Stanisław Lem’s satirical take on the matter: that the observed expansion of the universe is the result of highly-advanced, astro-engineering civilizations blowing it to pieces. Men kick over molehills and not stars, because it’s all they can do – for now. Give us time and we’ll show our galactic brothers a thing or two…
It’s worth remembering that all our guesses on what intelligent alien life might be like are extrapolating from a sample of one (us). Even accepting this, it seems likely that there’s a pretty narrow window for detecting alien civilizations with the means presently available to us – both spatially and temporally. I believe we may already be past the point where the Earth was a significant source of identifiable electromagnetic communications and it took us what, less than a hundred years? It seems very much a case of “blink and you’ll miss it”, leaving aside looking in the right direction at the time.
In other words, the paradox may well come from simply not seeing the forest for the trees.
The assumption is that we would be visited by self-replicating drones. If you can build such a thing it is a good move to send one out, let it populate the entire galaxy, and at the very least make sure potentially dangerous rivals don’t go unchecked.
To quote the Spartans: “If.”
For all we know, by the time technology advances to the point of being able to produce self-replicating, reliable, interstellar drones, the possibilities may be such that the whole idea is rendered obsolete (it’s not like it hasn’t happened to us in the past). That’s assuming it is possible to produce self-replicating, interstellar drones that would be able to reliably function on a galactic scale.
Also, they may simply not have reached us, yet.
Either way, the whole idea still runs into Justice Stewart: we’re assuming that if we were visited by alien self-replicating drones, we’d know. There are so many reasons why this assumption need not hold that it seems futile to begin to list them. I’ll restrict myself to pointing out that they’d have to be identifiable as such (by human standards of what is identifiable as technology/artifice), they’d have to be detectable by humans, the humans in question would require sufficient technological sophistication to understand what they were dealing with (alien tech) and so forth.
We don’t have any particular reason to expect that the above must necessarily hold for any past, present or future contact, therefore absence of evidence may simply be evidence that we’re asking the wrong questions.
I’m not saying that there are no possible explanations. People have been coming up with possible explanations all the time. I’m just pointing out that the paradox cannot be resolved by pointing out that other stars are far away and technological civilisations aren’t necessarily detectable over huge distances.
I can’t really see why not, but that’s beside the point.
The point is that we could be looking at evidence of advanced alien life right now and not realize this to be the case.
This argument feels a bit too general for me. You can dismiss anything we know about life in the galaxy by saying “Well, maybe the aliens are just so alien that they’re beyond our powers of observation.” If we did send out a wave of self-replicating interstellar probes and searched every star in the galaxy, you’d still be able to say “Well, maybe the other aliens ascended into the fourth dimension instead of building self-replicating probes.”
It’s sort of a “god of the gaps” argument. I can’t disprove it, but I can point out that the range of aliens we can see is slowly getting constrained to those who don’t emit EM signals, expand to significant portions of the universe, or use their abilities to intervene in the universe in a way we’d recognize as intelligent.
That is a good objection, although I must point out that we have not, in fact, sent out a wave of self-replicating stellar probes and waited for them to achieve any sort of meaningful penetration of the galaxy. (I jest, but objections to the existence of orbiting teapots seem better founded if an at least moderately thorough search has been undertaken first.)
I think EM signal emissions are a good place to focus on (which is why I brought them up originally). Extrapolating from our sample of one, we can hypothesize that the “broadcast” period of civilization development need not be particularly long. While we still use a lot of EM comms, I’m under the impression that we’re not really beaming that much out into space anymore (if only because it is wasteful). By this I mean both the return to wired communications (primarily via the internet) and the use of relays (e.g. satellites) to keep our EM signals weaker and more focused. The change from essentially no EM broadcasts, to peak, to significantly reduced (if my understanding is correct) happened within a single century.
The takeaway here is that technological progress doesn’t neatly correlate with increased EM signals being broadcast. At best, we can hope to catch a glimpse through a narrow window.
It gets worse: space is full of EM radiation to begin with, so not only do we have to get lucky to intercept a signal (for reasons outlined above), but we have to be able to distinguish a signal from noise.
And here’s the rub: we can make guesses as to what a signal should look like, but we have no idea whether any of them are even close until we establish contact with an ET intelligence that uses EM radiation for communications and see how they actually go about it.
My point is that the Fermi Paradox hinges on the assumption that we’re interpreting our observations vis a vis ET life correctly. I believe this assumption to be fundamentally wrong and defensible only if the underpinning theory postulated direct contact as a matter of necessity. To the best of my knowledge it makes no such claim.
To address the “god of the gaps” issue more directly: the kind of paradox-relevant observations being discussed here (EM transmissions, self-replicating drones, etc.) necessarily require not just any kind of alien intelligence, nor yet merely a generic “advanced alien intelligence”, but an alien intelligence that is very similar to our own (or, at best, ones we can imagine) existing in just the right time and place to be spotted by us. That’s a lot of very-low-probability parameters in the Drake Equation.
It’s not “god of the gaps”. It’s “Where’s Wally?”
Put simply: recognizable intelligences (that is: those similar to us) may be hard to spot (as we were and are); easily-spotted intelligences may be hard to recognize (the point I’m hammering).
I could continue by speculating on variant biologies, astro-engineering technologies that are indistinguishable from natural processes at a distance, the possibilities of creating variant languages, logics and mathematics (each of these can be seen as a game and it would be foolish to expect that the ones we commonly use constitute some kind of universal constant, provided others can adequately model the physical world of their users), and how each of these factors could completely invalidate all our assumptions of what alien contact would look like, but I expect it would strain the patience of everyone involved.
A simpler and quicker alternative is to accept that the Fermi Paradox tells us absolutely nothing about the distribution of intelligent life in the universe.
Thing is.. I do not believe anyone would ever do high-gain signals.
There is an idea circling around astronomy circles that makes them drool at any and all advances in deep space propulsion. Because if you can get a telescope out to around 550 AU in a reasonable time frame, you can put up a piece of tinfoil to occlude the actual sun, point your telescope at the sun, and use the gravity field of the sun as a focusing lens the width of the solar system. Gravity bends light (and radio) – be at the focal point, and you can see to the ends of the universe. You can also receive what is basically ham radio signals from anywhere in the galaxy.
One obvious thing to do with a telescope like this is to find every living planet in the galaxy. This is trivial, since you can certainly resolve them well enough to do atmospheric analytics.
Then at each focal point that is pointing at a lifebearing world, you leave a tiny self-maintaining transmitter sending your standard greeting. Once they start doing this kind of astronomy, they will hear it. And this transmitter barely needs to be transmitting at all – you are using the sun to focus the signal you are sending!
And anyone who has radio in the first place will be doing this in at most a few centuries. Given the timescales involved, that is basically “instantly”. So there is no point whatsoever to building monstrous transmitters – anyone with ears is going to grow extremely sensitive ears soon enough, so the ham radios running the loop for millions of years suffice.
@Thomas Jørgensen
Gravitational lensing works both ways. One of the other things you can do with it is to use what is basically a ham radio to send messages that a Marconi or a Tesla will receive in the target civilization. Or use what is basically an overgrown laser pointer to make your star blink green in a way that will attract the attention of a Ptolemy or an Aristotle.
So for no one to ever do high-gain signals, you need absolutely everyone to use gravitational lensing for passive reception only, for absolutely no one to send messages like “Hey, we’ve noticed that some civilizations run into problems and self-destruct between the Early Radio Age and the High Space Age, here are a few pointers that might be helpful”, or “Psst, guys, the Berserkers are listening, so keep it down!”, even though it would be cheap and easy for them to do so once they’ve got the telescope stations set up in the first place.
Not impossible, but it does seem unlikely. And then you get to the bit where everybody has to agree not to build Dyson shells.
How much energy would you need to make your star look another color? Even with gravitational lensing that sounds… far-fetched.
I’m a mile out of my depth and only recall complaints about this that others have written (where I read them I forget). The consideration I recall is that you basically have to find the civilization first before this becomes effective, right? You can’t (cheaply) make your star pulse a different color in all directions, and if you can’t identify active civilizations you are just randomly buzzing planets, which means the signal has to reach them at the right time to even be noticed, let alone answered (a tighter window), and then you have to sit and wait twice the light-travel time to the planet to get a response.
To illuminate an Earth-like planet at 50 light-years’ distance, brightly enough that a sun-like star would “blink” by a full stellar magnitude and with an obvious color shift if you use e.g. a green diode laser, would require (back of the envelope) twenty-five kilowatts of radiated laser power. And an “antenna” gain of +228 dB, thanks to the gravitational lensing.
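A quick sanity check of those figures, using nothing but the numbers in the comment above (my arithmetic, so treat it as a sketch):

    import math

    LY = 9.461e15                    # light-year, m
    L_sun = 3.828e26                 # solar luminosity, W

    P_laser = 25e3                   # 25 kW of radiated laser power
    gain_db = 228                    # claimed gain from gravitational lensing
    gain = 10 ** (gain_db / 10)      # ~6e22

    d = 50 * LY
    area = 4 * math.pi * d**2

    flux_star = L_sun / area               # a sun-like star seen from 50 ly
    flux_laser = P_laser * gain / area     # the lensed laser: EIRP / (4 pi d^2)

    delta_mag = 2.5 * math.log10(1 + flux_laser / flux_star)
    print(f"star flux:  {flux_star:.1e} W/m^2")
    print(f"laser flux: {flux_laser:.1e} W/m^2")
    print(f"apparent brightening: {delta_mag:.1f} magnitudes")

This gives an apparent brightening of a bit under two magnitudes, so the “full stellar magnitude” claim holds up at least at the back-of-the-envelope level.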
An optical telescope with a 40-kilometer mirror would also do, and those might be practical to build in extreme microgravity conditions.
Yes, you do have to know where the candidate planets are to make this work – really, you have to know where they will be in fifty years, but if you can do the one it just takes a bit of observing time to do the other.
But to do gravitational-lensing telescopy at all, you have to commit to “that star looks interesting – I will invest in a dedicated spacecraft to fly out to its unique focal region w/re our star system, billions of kilometers from the focus for any other interesting star, and take a look”. If you do that, then you’re only adding the cost of a 25 kilowatt laser and maybe a longer-lived power supply and telemetry system, to be able to send messages to any astronomers that might exist or arise in the potentially interesting target system.
True, at 50 LY, you don’t get any measurable payback for a century, but people frequently do things that don’t produce measurable payback ever, so asserting the heavens are silent because 100% of alien civilizations were put off by the delay seems presumptuous.
@ John Schilling
I feel like you skipped the more relevant half of the critique, that you have to signal another civilization within a very narrow band of its existence. Humans have only had telescopes for maybe 0.25% of our existence as a species, and have had the ability to blink a message back for only a sliver of that. If someone happened to blink Earth sometime in the last 100,000 years, and kept that blinking up for a full year, what are the odds that someone of note (i.e. someone who could at least leave the information for the future to understand and believe) would have witnessed it?
It seems like if you wanted to increase your chances of contacting a civilization to 50% in this manner, you would have to put tens of thousands to millions of these attempts into practice.
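For what it’s worth, here is a toy version of that calculation. Every number in it – especially the chance that anyone capable is actually watching – is an assumption of mine, not something from this thread or the paper:

    import math

    span = 100_000      # years over which the beacon might have blinked us
    blink = 1           # duration of one blink campaign, years
    window = 400        # years in which humans have had telescopes (rough)
    f_watching = 1e-3   # assumed chance someone capable is actually looking
                        # at that star during the overlap (pure guess)

    # Chance that a one-year blink, placed uniformly in the last 100,000
    # years, overlaps the telescope era at all, times the chance it is noticed:
    p_single = min(1.0, (blink + window) / span) * f_watching

    # Independent attempts needed for a 50% chance of at least one success:
    n_for_50 = math.log(0.5) / math.log(1 - p_single)

    print(f"chance per attempt: {p_single:.1e}")
    print(f"attempts needed for 50% odds: {n_for_50:,.0f}")

With those made-up inputs it lands around a couple hundred thousand attempts, i.e. squarely in the “tens of thousands to millions” ballpark – and the answer is very sensitive to the guess about how often anyone is watching.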
It is not the century-long wait that is the problem, it is the hundreds-of-millions-of-years wait while life evolves something with “thumbs” and a brain. Though you are correct: on the scale of an apex civ, morse-coding in green to every single living planet in the galaxy not blocked by gas or the core, forever, is probably not that unreasonable.
… This is a wonderful book hook. Priest in ancient egypt-equivalent notes that the sacred eye of the goddess is in fact blinking math. Cue esoteric cult. Cue esoteric cult with terrifying “magical” powers a century or three down the line…
Then they are a bunch of pikers who will never amount to anything. The universe belongs to the civilizations that, when they bother to send space probes out a thousand AU to set up gravitational-lens telescopes focused on an interesting star, also instruct them to send a “hello, world” greeting and never stop.
But this shoves the whole “we should see stars blinking green because it is reasonably cheap to do” hypothesis out the window. If you want to actually hit another civilization you have to set up tens of thousands to millions of these beacons and set them to run indefinitely. If you want to hit another civilization and know that you have done so, you have to set up tens of thousands to millions of watching stations as well, and set them up to watch indefinitely. Even if such a civilization existed you still wouldn’t notice them if a civilization crash caused them to miss their 1,000-year regularly scheduled maintenance, or a bad-wig-wearing loudmouth was elected planetary president and diverted the funds to building a ‘wall’ around their system to make sure any visitors had the proper paperwork, or some crank faked an alien invasion in an attempt to get out of a recession.
Indefinitely is a hell of a long time.
Indefinitely isn’t that much longer than a year, for a piece of solid-state electronics sitting in vacuum at about 20 K. Build it right, and then don’t turn it off.
Isn’t it “build it right, aim it perfectly”?
Station keeping and the power source for the laser are the trickiest bits. These are way too far out in the black for solar, at least without stupidly huge… oh, for fuck’s sake, of course they would cycle the mirrors between power collection and observation. Never mind.
Alternatively, Technetium decay in a magnetic bottle / nanoscale dust or diamond power cell setup only requires a topoff every few hundred thousand years…
I don’t know too much about gravitational lensing, but I read the Wikipedia page. It has a focal line instead of a focal point, so does that mean if you just aim in the right direction, you don’t need to worry about distance to your target?
If so, how big is the viewing area of der blinkenlights?
And in that environment you can probably do stationkeeping by using the mirrors as solar sails, attitude control by differential photon pressure, so no propellant or moving parts required. It’s actually kind of interesting to consider how you’d design a spacecraft for maximum life, and what the limiting factors would be.
Probably cosmic-ray damage to the microprocessors, which argues for making it as big and dumb as you can get away with.
Just want to bring up a thought I’ve had on the “rare intelligence hypothesis”, that is to say the idea that animal/multicellular life may be common but intelligence still rare.
Now I think this has merit, but on Earth, at least among land vertebrates, there seems to be an “Increasing degree of Intelligence.” To use a computer science analogy: right now we can build pretty complicated neural networks, but we’re building on what we did in the past. We couldn’t go straight from early computers to Deep Learning; intermediate steps were required, and once those intermediate steps were complete it became easier to get to modern Deep Learning.
This becomes somewhat easier to conceptualize if instead of saying “Intelligence only arose once on Earth”, we lower the bar. Let’s use roughly Dog-like intelligence as a proxy and look at the number of “Dog Intelligent” creatures over time. Also, for my counts I’ll be trying to use kind of loose biological groupings, so Humans, their extinct relatives, and Chimps would count as 1 occurrence of intelligence arising, not as 50.
If we do human-level intelligence it looks like this:
1 billion years ago: None
500 million years ago: None
200 million years ago: None
65 million years ago: None
20 million years ago: None
5 million years ago: None
10,000 years ago: 1, Humans
Now, if we look at Dog-like intelligence, which appears to be a necessary stepping stone to human-level intelligence, we get this:
1 billion years ago: None
500 million years ago: 1, Cephalopods (it seems Octopuses have gotten a lot more intelligent since then, although it’s hard to trace exactly when this happened).
200 million years ago: 1, Cephalopods (some people argue early synapsids such as Dimetrodon may have been as smart as Dogs, but I find that highly unlikely)
65 million years ago: 2, Cephalopods and some sorts of Theropod dinosaurs
20 million years ago: 6, Cephalopods, Theropod Dinosaurs (New Caledonian Crow), Canines/Carnivora in general, Primates, Cetaceans, and Elephants (and their many relatives that went extinct in the Ice Age extinction – many more separate lineages than just the mammoth; actually, looking at how many Elephant relatives went extinct makes me slightly suspicious they may have been quite a bit smarter than modern Elephants)
5 million years ago: 6, (same)
10,000 years ago: 6, (same, although many within the primate lineage at least are smarter)
If anyone knows any species I left out here please let me know.
It’s a little bit fuzzy telling whether, say, Elephants have gotten smarter in the last 20 million years, or when exactly Cetaceans “got smart”, but it’s roughly over the last 5-20 million years. I would say I find it somewhat likely that this large group of lineages got smarter, in the aggregate, over that 5-20 million years.
So it seems like intelligence may take a while to work up to, but then over time it’s a subtle evolutionary advantage and becomes more frequent. If humans went extinct tomorrow I wouldn’t be surprised at all if one of the “Dog-like” intelligent creatures reinvented civilization in under 20 million years. And if they did that they’d also be dealing with more cases of Caledonian Crow-like animals (i.e. animals that are pretty intelligent but not sapient). This means that an Earth-like planet that got past the “animal life” filter should have a VERY easy time developing intelligence, if you give it say a billion years (many planets have a billion years’ head start on Earth life). To me this seems like a pretty big blow against the rare intelligence hypothesis, and so if there’s a great filter, this isn’t the step. This is a minor filter if anything.
I also wouldn’t say the step is “Early Vertebrates” as Cephalopods also seem to be showing an increasing intelligence, although I know less about this and the main proof is that Octopuses (which are more recent) are much smarter than Nautilus (which are more archaic).
I also don’t find the idea that “There were tons of super smart species, much smarter than Elephants, roaming the planet in the Permian and they just didn’t fossilize well” very plausible.
I also read somewhere that (max?) dinosaur brain size increased pretty much linearly until they got hit by a comet.
Probably not all species but sounds plausible as an overall average trend. Like I’d guess some creatures really are less intelligent than their ancestors.
Also, some of the smarter theropod dinosaur lineages survived. And we have at least two lineages, New Caledonian Crows and Grey Parrots, that appear to be considerably smarter than their common early-bird ancestor, suggesting again that evolving greater intelligence does happen somewhat frequently.
Dinosaur brain size is probably more relevant than, say, elephant brain size too. In birds the number of neurons in the brain increases linearly with brain volume but in mammals that’s only true of primates and a few others.
I read somewhere that if you plot the “cephalization index” (basically, the brain/body ratio corrected for the automatic effects of body size) of all known species over time, most of them have an index of zero, of course. But the upper bound on the cephalization index of species increases quite steadily with time (apparently without hiccups due to the mass extinctions). That suggests “there is always an accessible niche for something a bit smarter than the smartest thing now”.
Do you have a paper citation for this? And did they account for phylogeny?
After all, it’s very easy for a “general trend” to turn out to be driven by just one or a few specialized clades. This happened with turtles on a less-important question – for a long time, statistics showed that sexual size dimorphism increased as size decreased, but later phylogenetically explicit analysis showed it was entirely driven by the map turtles, which are small and have crazy sexual size dimorphism, with no trend for all the others.
Part of the issue though is that intelligence alone isn’t enough. Dolphins are pretty smart but they don’t appear to be on an interstellar-culture-creating path. Elephants and Octopi have intelligence, the ability to grasp and manipulate things, and millions of years of existence. Are they hitting that path?
I think that it is somewhere between highly likely and obvious that you need the agricultural revolution to get humans to a point where they can maybe contact other civilizations. Some animals simply might be too big: elephants need huge amounts of food, and even if they developed proto-agriculture it is hard to see their settlement sizes getting large in terms of the number of individuals. If there is a critical mass of individuals you have to hit, then it might be impossible for elephants. Likewise, being too small and having a short life span might be too limiting. Rats are bright and have had millions of years of evolutionary opportunity, but their lifespans might be too short.
I imagine the elephant issue could be solved by having a subspecies gradually shrink and move into more marginal territories that can’t support full-sized elephants. Then come back for their larger cousins when they’ve got tool use and, uh, elephant guns. So to speak.
(but I’m not even close to being a scientist of any sort)
Dwarf Elephants were very much a thing. Usually Island Dwarfism. As far as I know they’re all extinct (human hunting, I believe the last Mammoth population was an example).
https://en.wikipedia.org/wiki/Dwarf_elephant
Like I said, a lot of Elephant relatives got obliterated by humans. It looks like about 50,000 years ago there were something like 20 elephant-relative species around, the big groups being Elephants/Mammoths, Stegodons, Mastodons, and the South American Gomphotheres. It seems possible to me one of those groups may have been quite a bit smarter than Elephants, which are already very smart.
Now my personal opinion, and I don’t know of studies on African vs. Asian Elephant intelligence, is that African Elephants, the ones who survived, are likely the smartest of them all. Why? They co-evolved with humans. That’s the only reason they survived. They’re already adapted to (primitive) human hunting strategies and know how to survive them.
Who is doing the shrinking?
Remember that elephants evolved to be large, so there is (or at least was) selection pressure in that direction, the combination smaller/smarter elephant has to overcome that pressure.
Which is an indication that smaller elephants aren’t a particularly robust species.
They went extinct en masse at the same time as all the other elephant subspecies. There were roughly 20 different (and pretty diverse) types of Elephant relatives at that time, and in the ice age extinction (which I think was pretty conclusively caused by human hunting) the big elephants all died off too, i.e. the Mastodon, Stegodon, and Mammoth. Only the Asian and African Elephants survived.
There’s also apparently at least two pygmy subspecies of Elephants that still exist:
African Pygmy Elephant:
https://en.wikipedia.org/wiki/Pygmy_elephant
Asian/Bornean Pygmy Elephant:
https://en.wikipedia.org/wiki/Borneo_elephant
I’m pretty surprised myself that Elephants are so prone to dwarfism. I’m guessing because it’s generally hard to maintain their size in general so dwarfism is a pretty good/frequent adaptation for them.
The Borneo elephant is “slightly smaller than the Asian elephant” according to one source, and has a population of around 15,000.
Quick googling of the other pygmy elephant doesn’t give easy mention of size, but it is a subspecies of the African forest elephant which averages 1-3 tons in weight. Even if it is half sized that is still a large land animal.
Mass extinctions (not caused by displacement of other related species) are generally not a sign of robustness.
The “robustness” in this case is that African Elephants coevolved with the first “Human Level” intelligent species (i.e. Humans). I mean look at everything else that died off in the Ice Age Extinction. It’s pretty clear that without humans a lot of them would still be around/ eventually gone on their own evolutionary paths (which may or may not have led to higher intelligence).
That said the Mammoth and Asian Elephant are very closely related. I have no idea why the Asian Elephant was the one non-African Elephant species that humans didn’t manage to kill off. My only guess is sheer luck or experience with Homo Erectus (although Mammoths had experience with them too).
Disclaimer: Serious risk of hindsight bias follows
The fact that humans wiped out multiple species of elephants across multiple continents in a relatively short period of (geologic and evolutionary) time is a strike against robustness. Very large animals have a tendency to bump up against constraints when change comes; if the pygmy portion of your lineage is 1,000+ lbs then you are going to (potentially) have difficulties. Even the smallest pygmy mammoths were 600-700 lbs, and those were island-bound (meaning small populations and general susceptibility to sudden extinction).
Yea, I agree larger animals are more prone to mass extinction in general, if a mass extinction event happens. However, if dwarf mammoths (or a creature like them, i.e. pretty large, dextrous, and smart) had the sort of runaway intelligence explosion that happened in humans ~5 million years ago and didn’t get hit by an asteroid, and were the first to intelligence, I doubt this relative lack of robustness would be a big deal. It’s only a big deal if they’re not the first (if there’d been a meteor 5 million years ago it would’ve wiped out early humans as well as elephants).
Actually come to think of it, humans themselves are a type of megafauna. I’d say they’d have been very prone to mass extinction (roughly as prone as elephants) in general as well. I think the creatures more immune to it are sort of the ones that are more mouse like/actually small.
Like Humans: ~150 lbs
Elephants ~12,000 lbs
Mice/mammals that survived the cretaceous meteor: ~0.03lbs
So elephants weigh roughly 80x humans and humans weigh 5,000x mice. Just saying: Humans/great apes in general are megafauna too.
My contention would be that animals like elephants aren’t going to hit that stretch of intelligence boosting, for several related reasons. First you have the fact that large animals tend to have longer generation times: if it takes 5 million years to go from our common ancestors with chimpanzees to humans now, then it might take 10 million years to get the same number of generations of elephant. The second part is that each generation is smaller for large animals, meaning less overall variation in one way or another. A process that takes 5-10 million years to make humans as smart as they are could plausibly take 40-60 million years for elephants, which greatly increases the number of potential extinction events that they have to survive.
A second point is that large animals have a hard time filling behavioral niches. The larger you are, the harder it is to switch to a different diet: what good would being a part-time scavenger do for an elephant compared to early humans or members of the dog and cat families? This (probably) constricts evolutionary pressures, and chokes off a lot of avenues.
But clearly we need more than two categories, right?
Eh, I think two categories is decent for predicting likelihood of mass extinction from Meteors and Supervolcanoes. The little bird and mammal ancestors survived and everything else died off (I’m not sure on the largest land lineage that survived the K-T extinction).
Also I think you might be onto something with the K-selection/long generation time but again Elephants are MUCH more similar to humans here than humans and elephants are to anything else. K-selection/long generation time seems like a rare adaptive strategy that enables higher intelligence rather than prevents it. Also if it just takes twice as long that’s not a big issue in terms of the Fermi Paradox/geologic time.
K-selection is also one of the more common arguments against octopus civilization.
I have a notion that you don’t get out into space unless you have intelligence *and* good manipulators. My impression is that good manipulators (probably but not necessarily hands) are weirdly less common than intelligence.
It’s hard to say, but I would say Elephants (trunks), Cephalopods (tentacles), Carnivora (thinking bear paws in particular, but I’m sure some have even better), and Crows (they can do a lot with their beaks) all seem pretty dextrous, and if they were smarter it wouldn’t be much of an issue.
So of the 6 intelligent lineages I mentioned, only 1 of them (Cetaceans) seems like it’d have a genuine problem with, say, writing or striking flint together.
Cephalopods might be able to strike flint together, but it’s not likely to be very useful underwater.
Crows might have nimble beaks, but it seems likely that having two hands is a qualitative advantage over just having one.
On the water world idea I agree. However, water worlds, or planets without fossil fuels, or planets with such a high oxygen concentration that you set everything on fire just by lighting a fire are sort of a different category of Fermi Paradox solutions. Here I’m just trying to address the specific idea that “Animal Intelligence is common in the Universe but Human level intelligence isn’t”.
For crows I have to agree that 2 hands is useful but not necessary. As a right handed person I also find I don’t use my left hand for all that much except less dextrous things.
Neat video clip of New Caledonian Crow tool use:
https://www.youtube.com/watch?v=lcvbgq2SSyc
@Cephalopods might be able to strike flint together, but that’s not likely to be very useful underwater-
True. Weaving kelp into nets and fishing lines, cracking small rocks into cutting edges, shaping bone and driftwood into harpoons, picks, shovels, tent pegs, and the odd magic dildo would long predate fire. Written language might long predate fire.
I think that, in the case of crows, their “good manipulation” is simply a byproduct of intelligence (their beaks do not seem to be especially different from the beaks of most other birds).
Note that the two are likely to be independent, or negatively correlated. If we have multiple examples each of intelligence and of dexterity, which suggests that neither is astronomically unlikely, then the combination of the two isn’t that unlikely either, even though we only have one example of the two together.
I find it disheartening that I disproved this “Increasing degree of Intelligence” on the SSC reddit, yet you continue to copy/paste it.
To be blunt – if you pick ANY trait which has evolved convergently, the origin times of that trait will be biased, at a seemingly accelerating rate, towards today if you exclude fossil taxa. If all groups of a given taxonomic level have equal probability of arising and going extinct, older lineages will be rarer.
Consider courtship songs – we can’t hear fossilized species, so we see audible courtship songs in Insecta (~400 mya), Teleosts (~310 mya), frogs (~250 mya), modern birds (~90 mya), bats (~50 mya), whales (~50 mya), etc. I’m probably leaving a few out, but look at that trend – most are recent, with an “accelerating rate”. But it’s due to two factors – the recent diversification of two clades who use it a lot (birds and mammals) and the lack of data from extinct species.
All known evolutions of carnivorous plants happened within 8-72 mya. Does that mean there has been some recent huge “push” to carnivory in plants worldwide, and we should all arm ourselves against the rise of an army of Triffids led by Audrey 2? No, it just means that it’s very hard to infer carnivory in plant fossils, that they live in areas not conducive to fossilization, and that their habitats are highly specialized and likely transient on geological timescales, leading to frequent extinction.
Hell, try just picking eight random vertebrate families, and you’ll get the same results. Or “animals that are blue”. Find some published phylogeny with date estimates for any taxonomic group, generate some random numbers to pick tips, and see what I’m talking about.
The structure of evolutionary trees means that if you exclude extinct taxa, the ages of any group of lineages will be biased towards recent times.
I reposted it because I find your disproof decent but not conclusive.
I find it HIGHLY unlikely that there are a bunch of extinct species from before 50 million years ago as smart as Chimps, Dolphins, Elephants, or even Dogs that somehow just didn’t fossilize well. At the very least the frequency would’ve had to have been extremely low. I’m sure 200 million years from now it won’t be hard to find Cetacean or Carnivora fossils.
Though I will grant you it’s possible I’m underestimating Crocodilian/Dimetrodon/Monitor Lizard intelligence quite a bit, and there may have been some theropod dinosaurs on the level of the New Caledonian crow.
The fact that the branch lengths of random extant tips of a phylogeny will be biased towards short lengths is indisputable mathematics – everything else is just icing on that cake.
Flip your argument around – given only their fossils, could you tell which extant species are intelligent? Absolute and relative brain size only works in mammals, and even then it is confounded by diet, social group size, and phylogenetic ancestry. Birds have small brains but high intelligence due to greater neural density, so we can’t hold this constant, making cranial volume even less reliable across taxonomic levels. Cephalopods and any other invertebrate would be a total wash because it’s all but impossible to infer brain size from fossils. Can you point to a method which would allow you to determine the intelligence of all the species you list ONLY from parts which fossilize, and distinguish them from less-intelligent ones? No? Then you have zero basis for your doubts about smart species prior to 50 mya.
Furthermore, even if you could look at a complete, intact braincase and determine intelligence, you’d still have an uphill climb, because that’s not a common fossil. Most actual fossils are more like roadkill than museum front-hall display specimens, usually smashed flat and fragmented by geological or taphonomic processes, usually incomplete. How can you tell me about the IQ of most small fossil mammals when all we have is teeth? Even for huge animals like dinosaurs, if you go looking for what parts we actually have, we have complete skulls for only a small proportion, usually super-common species like Edmontosaurus.
Now, consider the null model – that intelligence arises at random over evolutionary time. Generate some phylogenies with a given divergence/extinction ratio and random appearances of intelligence at a fixed probability, then look at the times intelligence arose in only the extant species – you’ll get exactly the same trend. This is my point – the observed pattern cannot be distinguished from the null hypothesis. If you imagine a second null model, that intelligence evolves more easily with endothermy (because if you’re pissing away 90% of your calories for heat, what’s a few more?), it would also produce strikingly similar results to those you observe, without any need for some “acceleration of intelligence” (remember that, to the best of our knowledge, full endothermy in both mammals and dinosaurs/birds is quite old, near the base of both groups).
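For anyone who wants to actually run that null model, here is a minimal toy version (the parameters are mine and purely illustrative – a proper job would use a real birth-death tree simulator):

    import random, statistics

    def simulate(T=500, n0=20, p_spec=0.02, p_ext=0.015, p_trait=0.0005, seed=1):
        """Toy birth-death process: lineages speciate, go extinct, and
        independently evolve a trait ("intelligence") at a constant
        per-lineage rate.  Returns the times of all origin events and the
        subset of origin events still visible among extant lineages."""
        random.seed(seed)
        lineages = [{"alive": True, "origin": None} for _ in range(n0)]
        origin_time = {}                          # origin-event id -> time
        for t in range(T):
            babies = []
            for lin in lineages:
                if not lin["alive"]:
                    continue
                if lin["origin"] is None and random.random() < p_trait:
                    oid = len(origin_time)
                    origin_time[oid] = t
                    lin["origin"] = oid
                if random.random() < p_ext:
                    lin["alive"] = False
                elif random.random() < p_spec:
                    # daughter inherits the trait; not a new origin event
                    babies.append({"alive": True, "origin": lin["origin"]})
            lineages.extend(babies)
        all_origins = sorted(origin_time.values())
        visible_ids = {l["origin"] for l in lineages
                       if l["alive"] and l["origin"] is not None}
        visible = sorted(origin_time[i] for i in visible_ids)
        return all_origins, visible

    all_o, vis_o = simulate()
    if all_o and vis_o:
        print(f"{len(all_o)} origin events in total, {len(vis_o)} visible among survivors")
        print(f"median origin time, all events:      {statistics.median(all_o)}")
        print(f"median origin time, survivors' view: {statistics.median(vis_o)}")

Even though the trait arises at a constant per-lineage rate, the origins you can still “see” among the survivors pile up toward the present – which is exactly the pattern being read as an acceleration.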
Until and unless you can find some way to reliably infer intelligence level from fossils, which works across widely separated groups, and can somehow account for the gaps simply caused by taphonomic and collecting bias, then incorporate that data into a phylogenetically explicit analysis, you have no basis for rejecting the null model(s). You’re postulating a general trend where none exists, and making unsupportable statements about taxa you cannot test.
I was going to post basically the same thing @Cerastes posted in his line of posts, but now I am happy I don’t have to write a long, thought out version that would probably be slightly less compelling.
Thus I’ll just say I agree with 2 main points here:
1. We simply don’t know how smart ancient animals were. Old timey sharks could have been coordinated pack predators, same with all sorts of old animals.
2. Warm-blooded creatures (endotherms) might just have an advantage in exploiting such niches when competing against ectotherms. Brains need lots of calories, so the “shove all the calories into the mouth” strategy of mammals/birds actually works very well with that. We know that there are cold-water amphibians. We also know that they evolved a looooong time ago. On the other hand, birds evolved into penguins pretty recently. We know there are cold-water sharks; they also evolved a long time ago. Cold-water whales are common and recent. It’s easier for that body type to invade places where you might otherwise freeze to death.
@idontknow131647093
I think you make a decent point which weakens the thesis somewhat but it still holds.
1.) Even if the average Dimetrodon or Shark in the Permian were smarter than Dolphins, this doesn’t discredit the overall idea that there’s been an increase in animal intelligence since the Cambrian Explosion (i.e. the animal-level step). If instead of “Dog Intelligence” we use something like “Crocodile-level intelligence” we get this:
1 billion years ago: Nothing
500 million years ago: 1, Cephalopods (fossil-record caveat still applies)
200 million years ago: 3, Cephalopods, some Synapsids, some Diapsids
Present Day: 3, I’m pretty sure no other lineages have since emerged.
I’m counting Synapsids and Diapsids as two separate lineages here because, as best I can tell, both therapsids and crocodiles are much smarter than their common ancestor. The number of lineages is smaller, to be sure, as this is basically a restatement of the “maybe early vertebrates are a necessary step” problem.
The number of lineages since the Cambrian explosion has still increased, though, raising the probability of later intelligence. I’ll also point out that Synapsids and Diapsids more or less colonized the land / took up that niche and may have suppressed the evolution of any other smart critters – sort of like the idea that the first life after abiogenesis may have prevented another abiogenesis event.
Now, is it possible there were some fish 400 million years ago, such as Placoderms, that were as smart as crocodiles? Possibly, but using what we know about living fish I find that highly unlikely, just as I find the idea of dog-level therapsids highly unlikely.
I’ll also just say that after running that math I’m more convinced it’s the diapsid/synapsid/lizard intelligence-level step that’s the blocker. Lots of synapsids/diapsids are obviously smarter than their common ancestor, yet we don’t have any other examples (besides Cephalopods) that are on a synapsid level. This is especially weird as the previous step, “fish intelligence”, is pretty widespread. But again, it could just be a first-mover-takes-the-market type thing.
2.) Warm-bloodedness in this example would just be another “necessary” prerequisite to higher intelligence: once you have it, the probability of reaching higher intelligence is much higher. So human-level intelligence isn’t a random event after a Cambrian-explosion-like event; rather, its probability is greatly increased by previous steps such as warm-bloodedness or getting a very basic brain/neurological structure down.
I’ll give you, though, that I have no idea if warm-bloodedness is a random fluke. But it does seem easier to go (simplified numbers):
lizard creature -> 0.1 chance of warm-bloodedness per 10 million years -> 0.01 chance of evolving higher intelligence per 10 million years
than
lizard creature -> 0.000000001 chance of evolving higher intelligence per 10 million years
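Just to make that comparison concrete, here is a tiny sketch iterating those made-up per-10-million-year probabilities over 300 million years (the inputs are the simplified numbers above, nothing more):

    # Each step is 10 million years; run for 30 steps (300 My total).
    steps = 30

    # Stepwise path: lizard -> warm-blooded (0.1/step) -> intelligent (0.01/step)
    p_lizard, p_warm, p_smart_stepwise = 1.0, 0.0, 0.0
    for _ in range(steps):
        new_warm = p_lizard * 0.1
        new_smart = p_warm * 0.01
        p_lizard -= new_warm
        p_warm += new_warm - new_smart
        p_smart_stepwise += new_smart

    # One-shot path: lizard -> intelligent directly (1e-9/step)
    p_smart_oneshot = 1 - (1 - 1e-9) ** steps

    print(f"stepwise path, P(intelligence within 300 My): {p_smart_stepwise:.2f}")
    print(f"one-shot path, P(intelligence within 300 My): {p_smart_oneshot:.1e}")

With these toy numbers the stepwise path gets you to higher intelligence within 300 million years a sizeable fraction of the time, while the one-shot path is still at odds of a few in a hundred million – which is the point of treating warm-bloodedness (or any other prerequisite) as its own dice roll.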
Thought about it a little more, remembered the existence of the Anapsids, and realized I was wrong on the number of Amniote lineages.
In fact, early on there were 9 distinct Amniote lineages:
https://en.wikipedia.org/wiki/Amniote
The thing is, Synapsids and Diapsids seem to have outcompeted them, in an event that reminds me of both the “abiogenesis first mover” situation and the “humans killing/outcompeting every single one of their close relatives (such as, say, Paranthropus and Neanderthals / anything closer than Chimps)” one. Although this situation is a bit more interesting because 2 lineages survived instead of just one.
As far as fish go, it’s also true that once Amniotes arose they almost immediately recolonized the Oceans and so would’ve taken up the “Lizard Intelligence” niche within the Oceans. The Cetacean strategy isn’t anything new, although this lineage getting high intelligence seems to be.
On Ichthyosaur intelligence wiki says:
https://en.wikipedia.org/wiki/Ichthyosaur#Social_behaviour_and_intelligence
“Generally, the brain shows the limited size and elongated shape of that of modern cold-blooded reptiles. However, in 1973, McGowan, while studying the natural endocast of a well-preserved specimen, pointed out that the telencephalon was not very small. The visual lobes were large, as could be expected from the eye size. The olfactory lobes were, though not especially large, well-differentiated; the same was true of the cerebellum”
Which is a shame because otherwise it sort of seems like putting amniotes in the Ocean is a fast track to Intelligence as it seemed to be with Cetaceans.
But yea, all I’m saying is that, as with the probability example earlier, some steps such as endothermy and an amniotic sac seem to dramatically increase the chance of “getting smarter” later on. So it’s not one random, very small dice roll since the Cambrian explosion. It’s a series of small-but-not-tiny dice rolls that, given enough time, seem pretty likely to lead to higher intelligence.
I’d also like to point out that while an amniotic sac may increase the probability of higher intelligence, it is likely not the only event capable of doing so. For instance, for Cephalopods “losing the shell” seems like a roughly equivalent step. I’ll also add that “decent vision” (I’m sure a biologist can better explain how vertebrate eyes and Cephalopod eyes are closer to each other than to insect eyes; I forget the exact argument) is probably another such step.
There are plenty of stars that have been around far longer than our own. Within the last 20 years we’ve found that a large share of them have planets within the habitable zone. However, there may be fewer stars with the sun’s metallicity, as is explained elsewhere in the thread. If metallicity is the issue, and given the length of time that intelligence takes to arise if not the low probability, it’s possible we’re the first (in the galaxy, or the section of the galaxy we can observe, not the Universe) but not the last.
@Syx78
I don’t really think the number of lineages that have X level of intelligence is relevant. The number of species might be; more likely it is the number of separate biomes where intelligence exists.
For instance, it would be more likely for Ape-like intelligence to evolve if there were monkeys on all the continents plus a few ocean-faring monkeys, than if there were a monkey in Africa and a monkey-level-intelligence dog in Africa, both in the same jungle. In example 2 you have double the “lineages” but the chances are much worse.
Definitely agree and I think that’s sort of what we’re seeing with the amniote lineages. Does there being just two(living) amniote lineages mean it’s unlikely? Well no not really because amniotes are so widespread (and have been since they arose).
Still there being multiple different lineages is important for showing that intelligence can arise in different ways/situations.
Yes, but it’s important not to debunk too much. Worley’s argument does not have this vulnerability.
Gradually increasing levels of intelligence, particularly if it develops independently multiple times, is enough to suggest that the chance of the development of human-level intelligence is not astronomically small, even if the rate of development is not accelerating.
Gosh, people keep assuming that they know what the future is going to look like, that “civilization” is a universal concept, that aliens will build spaceships and so on. This is just stretching our parochial human condition over the entire universe. It’s like a chimp going out and looking for banana trees as a marker of other chimpilizations and, after finding none, coming up with explanations and fancy formulas – using only chimp-concepts. We don’t know what the future will bring, what it will look like, and how to talk about it: There is No Fermi Paradox
The assumption you are bashing is just another factor in the Drake equation.
How so? I’d say I am questioning the validity of the Drake equation. In particular, we don’t know the mode of life higher intelligence will occupy, we don’t know if concepts like “intelligence”, “civilization” and “communication” will continue to be meaningful. We can’t even be sure that we will recognize it as anything even if it stared us in the face.
If there are a million alien civilisations, a certain fraction of these will retain some basic evolutionary goals, like acquiring resources, checking on potential rivals, etc.
Or to use your words: Given that we don’t know the mode of life higher intelligence will occupy, we have to assign a certain probability to a “mode of life” that includes colonising the galaxy.
You make a fair point, but it is exactly the thing I am questioning. I don’t think you can just add it as another parameter into the Drake equation (namely, percentage of civilizations that will remain civilizations and will remain recognizable to us).
To make an analogy, we might argue about whether it makes sense to believe in God. And I may say “Well, we can’t be certain, but whatever God is, heaven and hell sure do look unlikely (there will be no recognizable alien artifacts and civilizations)”. And you would then reply “Well, so you say – we don’t know – remember Pascal’s wager; that means you should still go to church (i.e. you should still follow the Drake equation)”. I’d say no, because I am doubtful of the validity of the whole enterprise.
It is easily possible that my thinking may be wrong, but from within my thinking it does not make sense to follow Drake equation.
You also have to take into account that the one example we do have of a technological civilisation seems to be pretty likely to go on colonising the galaxy if given the chance.
So while our default assumption should be that there is no god, our default assumption shouldn’t be that technological civilisations will inexorably develop into something incomprehensible to us.
This is exactly what I am arguing for in the article I linked in the first post – our default assumptions about what awaits us in the future should change. Imagine a chimp dreaming up its future descendants’ (i.e. humans’) life among endless banana trees. I’d say this is exactly what we are doing with space civilizations and such. Except that it gets even weirder, because chimps can’t dream up future descendants and civilizations – they don’t have the cognitive capacities for that. In the same way we can’t dream up whatever lies ahead. In fact I argue that it is necessarily the case, because we are cognitively closed creatures.
If you don’t have an alternative default assumption and good reasons for why this is a better one, then our default assumptions should not change.
“We can’t know what awaits us in the future” is not an argument that supports changing our default assumption. It is just an argument that the probability of the default assumption may be lower than we initially thought.
It’s also the third point I explain doesn’t work here
It doesn’t seem particularly persuasive, to be honest.
Observe that we’re presently incapable of even sending humans to the Moon, despite the fact that we’ve already been there before I was born (and I’m no spring chicken). I find the idea quoted above – that the one example of a technological civilisation we do have seems pretty likely to go on colonising the galaxy if given the chance –
to be optimistic to the point of wishful thinking, given that the benefits of such colonization are tenuous at best.
Space is big, mostly empty, good habitats are few and far between (especially considering that a large part of our functioning as a species is based on Earth’s biosphere; it would be astounding if that wasn’t unique), there’s a hard speed limit on both travel and communications that’s much too low to get anywhere within a reasonable time-frame, and all propulsion systems we can presently conceive as workable have the distressing feature of terrible fuel-to-payload ratios.
I love the idea of inter-stellar colonization as much as the next man, but it seems to me that FP argumentation involves way too much handwaving when it comes to the actual problems of interstellar colonization.
It doesn’t have to be impossible, just hard enough for anyone considering it to decide it’s not worth the bother, unless there’s a really appealing star nearby.
Again, looking at our own situation is instructive. There’s at least one good reason to get off-world as soon as possible: catastrophic events on a planetary scale (impactors, supervolcanic eruptions, etc.) We know such an event will happen eventually, we have no good way of preventing it and it will be an existential threat. We’ve had a good start on developing the requisite technology and we’d made the first steps – and then we decided to wrap it all up and go back home. What little space exploration we are doing has to be constantly defended against people who think the money involved would be better spent on ending poverty/curing the sick/preventing climate change/national defence, etc.
Observations with regards to population dynamics show that whilst population explosions are initially problematic, the situation stabilizes in the long run (to the point of declining populations in the developed world). How long does it take for a civilization persisting at a stable, moderate number of individuals to exhaust all the resources of its parent system?
Expanding across the surface of the Earth is easy (relatively speaking). Going into space is very hard and crossing interstellar distances is harder still. Creating any kind of “galaxy-spanning empire” without circumventing currently known physics is the job for an entire civilization over the course of countless generations. If there’s a hiccup along the way, getting back on track may be incredibly hard (evidence from our own space programs).
Gosh, you’re being remarkably condescending in a way that seems calculated to make people not want to seriously engage with you. As if you wanted to ridicule any dissenters into silence, or provoke them into overreaction.
I, nonetheless, dissent.
We are assuming we know what the future might look like, that “civilization” is a possible concept, that aliens could build spaceships, etc. And we note that if the other variables in the Drake Equation take their consensus lively-universe values, there will be literally billions of opportunities for them to do this. If p(starfaring|intelligence) is greater than 0.000000001, the expected results contradict our observation of an apparently-empty universe.
You appear to be making the all too common mistake of noting (correctly) that the probability of an intelligent species colonizing the universe is <<1, and equating that with a significant possibility of every intelligent species leaving the universe uncolonized. The (1-P)^bignum term in the math is very good at transforming modest possibilities into nigh impossibilities, and the key insight is that if P isn’t infinitesimal, bignum can’t be big and is quite likely one.
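To put illustrative numbers on that (mine, not the parent comment’s): with a billion intelligent species and p(starfaring|intelligence) = 0.001, the chance that every single one stays home is (1 − 0.001)^(10^9) ≈ e^(−10^6), i.e. zero for any practical purpose. To get even a 50% chance of a silent sky you would need p below roughly ln(2)/10^9 ≈ 7×10^(−10). That is the sense in which p can’t merely be small – either it is effectively infinitesimal, or “bignum” isn’t big.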
But you’re trying to compress your keen insight into five sentences, with no math and with a gratuitous comparison of dissenters to chimpanzees, so it’s hard to be sure what you are trying to say and even harder to care.
I am sorry for the manner I put this down in, criticism taken, will correct myself in the future.
As for the argument, my point is that there are very good reasons to suppose that even our future won’t look like anything that we will find comprehensible. To assume that concepts like “civilization” and “colonization” will remain applicable and that our future descendants will share our inner drives is to assume our cognitive universality.
We are biological creatures and our thinking ability stems from complexity of our brains. To assume that we are cognitively open, that we can understand everything there is to understand given enough time and resources, is to assume that no fundamental shift will happen in our perception of the world should we continue to evolve for the next million years or should we transcend our biology and create artificial general intelligence.
In my view this just continues our religious tradition – first we were the center of the universe, then we were at least created in the image of God, now we are at least cognitively universal. As Terry Pratchett said, “We are trying to unravel the Mighty Infinite using a language which was designed to tell one another where the fresh fruit was.” We created this self-made image of ourselves as free, universal, essential selves and we project this image on the entire universe. The reality is that we are biological creatures, that our thinking works in very particular ways grounded in our evolutionary history (Lakoff’s Philosophy in the Flesh, Jackendoff’s A User’s Guide to Thought and Meaning), that we do things for reasons but we often don’t understand those reasons (Wilson’s Strangers to Ourselves, Haidt’s The Righteous Mind), and that our core picture of ourselves as selves is questionable (Parfit’s Reasons and Persons, Flanagan’s The Bodhisattva’s Brain).
When our core biology changes due to evolution or artificial redesign, the chimpanzee analogy becomes not gratuitous but apt, in my opinion. For it allows us to use arguably the most important thinking tool in our toolkit, possibly the only one available to us to learn new things (Hofstadter’s Surfaces and Essences) – analogy and metaphor. To really try to see what our descendants may look like, may be like, we only need to look at the difference between ourselves and chimpanzees. Our dreams of interstellar civilizations then become nothing more than a chimp’s dream of endless banana trees, or whatever it is that chimps dream about. And the problems that actually worry us most, at least those of us lucky enough to have enough time and food, are principally different from those of a chimp. In fact, its neurobiological frame is not sophisticated enough to support understanding of those problems at all. Just as ours won’t be, as far as our descendants are concerned.
And here we are only talking 100,000 years. That’s an eye blink on a cosmic scale. What will happen in a million years? Or in a billion?
I was going to say that, yes, I had in fact heard this exact argument last year. I couldn’t recall where, but after a preliminary search, I think it was from the same authors perhaps. I was going to guess that I first heard it in an arxiv paper, however, so I am a little surprised to see it’s seemingly just now on arxiv.
I think there were at least some slides online months ago.
I was able to go back in my browser history and verify that the slides were what I saw (and in fact I found them from lesswrong). The arxiv paper that was itching at my mind seems to have just been my confusion, as I found a paper from around roughly the same part of the year, but about the odds of life around different star types.
They’ve been hinting at and talking about this forever, but I didn’t feel comfortable blogging about it until they’d published the paper.
Yes, it was getting awkward. Basically the paper was done, got rejected by Journal 1, we got too distracted by other things to submit to Journal 2, things begin to leak, we need to refer to the argument in other papers, OK – submit to Journal 2 and put up a preprint… WHAM! Viral. (And a lovely journal club – thanks!)
I really like this approach, but, just to throw an idea out there — would also like to see a second-order error estimate.
In other words, how dependent are their results on the shapes of the probability distributions? (and here obviously the central limit theorem should help; edit: but what if they’re exponential distributions?). And how dependent are they on their specific estimates for the size of the uncertainty? If they changed their assumptions in some plausible way about the distributions of several factors, what would the consequence be?
Maybe in the next paper?
Check out Supplement II, where we try various variations. Basically, the finding is very robust as long as distributions stretch over orders of magnitude.
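For anyone who wants to poke at that robustness point without reading Supplement II, here is a toy version; the factor ranges and distribution choices below are my own stand-ins, not the paper’s:

    import random

    random.seed(0)
    N_STARS = 1e11          # rough number of stars in the galaxy
    DRAWS = 100_000

    # Toy model: per-star probability of a detectable civilization is the
    # product of four uncertain fractions; each is uncertain over the number
    # of orders of magnitude given below (illustrative values only).
    widths = [1, 2, 4, 12]

    def log_uniform():
        return [random.uniform(-w, 0) for w in widths]

    def log_normal():
        # same medians, comparable spread, but Gaussian in log space
        return [random.gauss(-w / 2, w / 4) for w in widths]

    def p_alone(sampler):
        """Fraction of draws in which the expected number of other
        detectable civilizations in the galaxy is below one."""
        alone = 0
        for _ in range(DRAWS):
            log_p = sum(sampler())
            alone += (N_STARS * 10 ** log_p) < 1
        return alone / DRAWS

    print(f"P(alone in the galaxy), log-uniform factors: {p_alone(log_uniform):.2f}")
    print(f"P(alone in the galaxy), log-normal factors:  {p_alone(log_normal):.2f}")

With these made-up ranges the two distribution families give very similar answers (both a substantial probability of an empty galaxy), which is the sense in which the result mostly cares about the uncertainties spanning orders of magnitude rather than about their exact shape.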
FWIW, you may be interested to know that Lewis White Beck (a Kant scholar, of all things) made essentially the same points in his presidential address to the American Philosophical Association — memory a bit shaky, but I think it was about 1970 or so.
How far off do two identical earths – with technology permanently stuck at the 2018 level – have to be to not be able to notice each other, ever?
Not too far (he said, waving his hands wildly). But it’s not clear to me why this question is interesting. The only reason I can think of why ours would not progress further is that we destroy ourselves before 2019, i.e. our L is about 100 years.
Edit: I take it back; I find that I would be interested in one of the technical folks giving me a solid number for that, based on signal strength, etc.
On the other hand, “ever” is a long time. After a few million years stuck at 2018, how many Voyagers have we sent out, and how far would they have gone?
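On the request for a solid number a few comments up: here is the standard link-budget arithmetic. The formula is textbook (flux = EIRP/4πd², noise power = kTB), but every transmitter/receiver parameter below is an assumption of mine, so read the outputs as order-of-magnitude only:

    import math

    k_B = 1.381e-23      # Boltzmann constant, J/K
    LY = 9.461e15        # light-year, m

    def max_range_ly(eirp_w, dish_diam_m, t_sys_k, bandwidth_hz, snr=10, eff=0.6):
        """Distance out to which a narrow-band signal clears thermal noise.
        Ignores integration-time gain, so it is on the conservative side."""
        a_eff = eff * math.pi * (dish_diam_m / 2) ** 2   # collecting area, m^2
        noise = k_B * t_sys_k * bandwidth_hz             # noise power, W
        d = math.sqrt(eirp_w * a_eff / (4 * math.pi * snr * noise))
        return d / LY

    # Assumed scenario 1: a deliberately aimed Arecibo-class planetary radar
    # (~2e13 W EIRP is the commonly quoted order of magnitude) heard by an
    # Arecibo-class dish listening in a 1 Hz channel.
    print(f"aimed radar beam:  {max_range_ly(2e13, 300, 25, 1):,.0f} ly")

    # Assumed scenario 2: ordinary broadcast leakage, ~5e6 W EIRP, same
    # receiver, 1 kHz channel.
    print(f"broadcast leakage: {max_range_ly(5e6, 300, 25, 1e3):.2f} ly")

With these guesses, a deliberately aimed high-gain transmission is detectable across hundreds of light-years, while ordinary leakage fades into the noise within a fraction of a light-year – so two 2018-level Earths would essentially only notice each other if at least one of them were beaming on purpose at the right star.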
Based on this comment ( https://slatestarcodex.com/2018/07/03/ssc-journal-club-dissolving-the-fermi-paradox/#comment-644815 ) the technology seems to be already within our grasp (it would just take time and resources).
This article ( https://www.airspacemag.com/daily-planet/ultimate-space-telescope-would-use-sun-lens-180962499/ ) states a 550 AU telescope would have 10 km resolution out to 100 light years (though targeted at a single solar system, unless its momentum is changed to have it rotating around the sun).
This article ( https://www.newyorker.com/tech/elements/the-seventy-billion-mile-telescope ) says that such a telescope could be used to amplify certain EM frequencies 1.3 quadrillion times, sufficient to scan nearby galaxies for radio signals.
The error bars in our estimates of most of the Drake equation parameters are huge, with fat left tails having significant weight down to 10^-9, 10^-12 or further.
Under the fat-left-tails condition, the easiest way to get an appropriately low product of the Drake parameters is for exactly one of the parameters to be in the tail, that is, for one parameter to be many orders of magnitude lower than the median estimate. The least likely thing is for all the parameters to be an order of magnitude or so less than the median. (Two factors contributing significantly isn’t TOO unlikely, though.)
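A toy illustration of that point – everything here (the number of factors, the medians, the fat-tailed shape) is my own choice, meant only to show the qualitative effect:

    import random

    random.seed(0)
    N_FACTORS, DRAWS = 5, 200_000
    MEDIAN_LOG = -0.5        # each factor's median is 10^-0.5 (optimistic)
    THRESHOLD = -11          # ~1e11 stars, so a per-star probability below
                             # 10^-11 means we expect an empty galaxy

    def draw_logs():
        # deviation below the median has a power-law (fat) left tail
        return [MEDIAN_LOG - (random.paretovariate(1.2) - 1)
                for _ in range(N_FACTORS)]

    n_low = one_dominant = all_low = 0
    for _ in range(DRAWS):
        logs = draw_logs()
        if sum(logs) >= THRESHOLD:
            continue
        n_low += 1
        deficits = [MEDIAN_LOG - x for x in logs]      # orders below each median
        one_dominant += max(deficits) > 0.5 * sum(deficits)
        all_low += all(d > 1 for d in deficits)

    print(f"'silent galaxy' draws: {n_low}")
    print(f"  one factor supplies most of the deficit: {one_dominant / n_low:.0%}")
    print(f"  every factor at least 10x below median:  {all_low / n_low:.0%}")

In runs like this, the low-product draws tend to be ones where a single factor has fallen far down its own tail, while draws in which every factor is merely an order of magnitude low are rare – which is the Great Filter intuition restated.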
The hypothesized parameter that is many orders of magnitude lower than the median estimate is called the Great Filter, and my understanding is the above is basically the received argument for its existence.
This is all consistent with the idea in the paper; the paper just reinforces that, in a meta sense, we really can’t be so surprised that (at least) one of the parameters is very low, given our best guesses of the error bars.
That’s great, and it eliminates the need for baroque explanations like the nature preserve idea, but it leaves at least a Fermi Question behind: which parameter is it?
I find all the “in the past” Great Filters to fall apart under closer examination. They take time, but they also seem sort of inevitable over time in some sense.
My bet is on a future filter. I.e. right now, with our current understanding of the Universe, people think the whole K-2 civilization path is the most likely: that we slowly build up Dyson spheres and take over the Universe that way. The so-called Dyson Dilemna.
My solution is that there’s some future technology, one that we are unable to predict, that makes the whole K-2 Civilization idea look silly. Maybe it’s being able to go to other Universes / “ascending to a higher plane”. It can’t be something like Warp drive though, because that would only make the Dyson Dilemna worse.
I’ve also been thinking it could be “Aliens on Different Timescales”. Like if Aliens are moving incredibly slow or incredibly fast (not in terms of actual speed, more in terms of processing time) then they could be very hard to detect.
I was getting ready to introduce another way to “dissolve” the Fermi paradox/future filter/god king instruction manual I came up with a few weeks ago, but luckily a thread at the subreddit made it onto the first page of a Google search checking for originality. Having an “original” idea that is ONLY ten years out of date is probably a personal best for me!
Civilizational Quantum Suicide
https://www.quickanddirtytips.com/education/grammar/it-dilemma-or-dilemna
Processing time and other speed issues would ultimately come down to metabolism and mechanism of processing (e.g. chemical, such as our neurons). Even plants are relatively fast with their chemical processing (think fly traps or sensitive plants [Mimosa pudica]). So anything deviating too far from this norm wouldn’t be biology/life as we know it, and we wouldn’t have even a basis to guess at its probability. Though this comment ( https://slatestarcodex.com/2018/07/03/ssc-journal-club-dissolving-the-fermi-paradox/#comment-644659 ) guesses that it would be highly unlikely.
The above shows that the Great Filter can be one of the previously considered parameters (which was already a common claim).
The paradox was that this didn’t make sense, because scientists supposedly thought it wasn’t any of them.
The origin of the paradox was that people were summing up what scientists (or they themselves) thought in the wrong way.
While our existence proves nothing, I wonder if we can extract information from the timeline of events?
Breaking down Drake’s equation’s parameters a bit, we know that getting from a molten planet to life (leaving traces we can detect) probably took less than 1B years. Another 1B years and we may have had eukaryotes. The first multicellular life forms, another 1B. Animals with some intelligence, another 1B years. And us, civilization and all, some hundred M years after that, on a planet 4.5 billion years old.
Obviously very coarse approximations, from cursory readings of Wikipedia and other sources.
Did we get incredibly lucky with one of these? To me, getting the right combination of lipids, amino acids, and nucleotides to bootstrap life seems like a wild chance – and although theories abound, we don’t really know how the process came about. Currently there’s much hand-waving about lipid bubbles in stagnant geothermal pools, but no plausible chain of events.
Once you have cellular life, mutations and natural selection, things would start to develop, and intelligence seems more likely. It seems encouraging that we have relatively smart creatures among birds, mammals, and octopus – very different branches of the evolutionary tree. Intelligence also seems to vary quite a bit between related species, meaning the potential is there. But none besides ourselves seem to even approach human intelligence, and I’m not aware of any evidence of past species thought to have been particularly bright – brains the size of walnuts seem to be the rule, rather than the exception. So perhaps going from typical mammalian intelligence to human level intelligence (or to civilization) is more difficult than I would expect?
“To me, getting the right combination of lipids, amino acids, nucleotides to bootstrap life seems like a wild chance ”
I agree. Of all the "Filter is Behind Us" arguments, I find the super-early filter the most plausible at this point (I'm not super set on it and could be easily swayed). My reasoning is that while we can pretty much duplicate any other step in the lab, or at least could using current techniques over long enough timescales, we can't do it for abiogenesis yet. In fact, people aren't even really settled on what form of abiogenesis happened (did RNA world really happen?). At least, we can't entirely replicate abiogenesis in the lab yet; I know some experiments did show some promise. Whereas for eukaryotes, we really can just start making an algae cell absorb more other cells and become more eukaryote-like.
That is a really interesting point. Abiogenesis happened almost as soon as it possibly could on this planet given the conditions, and yet we can’t make anything like it happen in a lab. The thing that was the “easiest” for Mother Nature is the hardest for us.
Mother Nature’s lab was the size of the earth, and she got quite a bit of time from a human’s perspective to muck about with it.
But the other stuff, which took Mother Nature's way bigger lab way longer, we can approximate pretty well with small labs and modest research grants.
I think there is a much simpler solution to the Fermi paradox following the Anthropic principle. Assume space travel were easy, intelligent life common enough that different civilizations could see one another, and paperclippers an inevitable consequence of intelligent life. Then the first paperclipper could easily spread throughout space (e.g. via radio transmission or Von Neumann probes) and use up all negentropy, which would mean that a planet surface like ours could exist at most a few times. Our planet surface is thus strong evidence that life is rare enough, and space travel hard enough, that it is very unlikely for paperclippers to spread throughout space.
Isn’t that exactly backwards? What they have shown is precisely that the lack of observation of aliens does require a sci-fi theory to explain it.
The theory in question could be pretty hard sf, like different genetic codes, or it could be _alien zoo_. But it cannot be known theory plus random chance, any more than the distribution of mass in galaxies can be.
I was also very impressed by that paper and thought it actually lived up to its claim of dissolving the paradox. One thing that now annoys me about the way the Drake Equation is formulated is that it makes it seem like the two interesting events in the development of our civilization on Earth were life arising and life becoming intelligent. But really life arose almost as fast as liquid water did. And it took ages and ages for photosynthesis, then eukaryotic cells, then multicellular life to form. If life had developed photosynthesis more slowly we would have been without an ozone layer for longer and might have lost too much of the atmosphere for complex life. And as it is we’ve used four fifths of the time from when life arose until the sun gets too bright for us to get here, meaning life might very well not have gotten to the point of complex animals. Blog post with graphical timeline.
So what are we looking at then, loads of planets covered in algae and maybe a couple of intelligent species per galaxy who will likely never, ever meet up?
As one of those people who was aware of this point but didn't publish it in an academic paper, I am surprised by the reaction to this paper. I was interpreting the Fermi paradox not as a paradox per se but as the dilemma of determining which of the density-reducing filters we are aware of was significant for producing the observed low density of life. Illusion of transparency, I guess. I should get in the habit of writing up more of my thoughts.
Speaking of which, this reasoning can be taken further: consider multiple radically different mechanisms by which signal-producing life could evolve. By the same reasoning as this paper, we should expect the densities of successful enactments of each mechanism to be roughly independent samples of a log-normal distribution. Then I expect the most likely mechanism to be orders of magnitude more likely than all the rest. Therefore, assuming our civilization eventually becomes signal-producing, I expect most other signal-producing life to have formed through roughly the same mechanism and a similar environment. See my blog post for more details.
You are exactly correct in your interpretation of the “paradox”. Scott Alexander is being rather credulous by taking the paper seriously.
No, the lesson is that you should use bombastic titles.
Also, it’s not clear that this reaction is one you should desire. That lots of people have false beliefs about the contents of your paper is not good for propagating your innovations.
This is a nice paper. It formalizes the obvious solution to Fermi's paradox: since the product is low, one of the multiplied values has to be low as well. I agree that it's surprising that no one seems to have done a quantitative analysis of this observation before this paper.
One thing to remember is that we need to keep improving the constraints on the variables involved, and keep updating calculations like those in the paper. For instance, once we find, or fail to find, any evidence of ancient life on Mars, we'll improve either the lower or the upper bound on the probability of abiogenesis. This in turn will update the probability estimate in the paper.
>since the product is low, one of the multiplied values has to be low as well
Isn’t ‘one of the values is low’ the Great Filter?
The other option is ‘All the values are just small enough that they multiply to a really small number, but none is particularly small’.
Speaking of Drexler, it has gotten awfully quiet around molecular nanotechnology. Is that just the hype dying down and people being able to work in peace, or did we really resign ourselves to better sunscreen? I see that foresight.org is rather active, but it’s all information and policy events. What about Freitas and Merkle, for example, are they still actively researching? Are there any recent theoretical or experimental “oh wow!” results in connection with, say, a nanofactory?
Re: mutation rate, I’m not sure the “evolutionary rate so low” argument holds water.
There appears to be some kind of optimal mutation rate: if you expose microorganisms to mutagens (radiation, chemicals, etc.) you get a higher rate of mutations at first, but eventually it tends to settle back to about the same level as usual. As my genetics tutor put it, there appears to be a "just right" range of mutation rates, and if you push the mutation rate above that, then sooner or later some of the mutations will select for changes that improve fidelity.
The DNA repair mechanisms in eukaryotic cells appear to be capable of scaling up extremely significantly but don’t tend to be close to the limit at all.
Biology of Extreme Radiation Resistance: The Way of Deinococcus radiodurans
Likely because those individuals with too-low mutation rate eventually see their offspring weeded out of the gene pool.
If mutation rate is too slow vs optimum then one of the early mutations strongly selected for will be mutations that increase the mutation rate.
Bacteria even appear to increase their own mutation rate in response to environmental stress.
Stress-Induced Mutagenesis in Bacteria
You can also get stress-induced genomic change in some varieties of flax, where the change is predictable and can either revert or get locked in (and is heritable).
Re: Evolutionary rate
Evolution also seems to keep retrying many of the same tactics ( https://www.quantamagazine.org/can-scientists-predict-the-future-of-evolution-20140717/ ); it's the intelligent bipedal ape or the pond-skipping lizard that is the anomaly.
Though I have no idea how much recapitulation is pre-existing genetic possibilities and how much is novel mutation.
Amusingly, I know enough math that I should have thought of this point as well. It seems to me that the core of the un-intuitiveness is that the Drake equation is multiplicative, and the distribution of a multiplicative outcome is log-normal rather than normal. That doesn't make much difference when the factors have low uncertainty, but the uncertainties of the factors compound (the log-variances add), so if the factors have high uncertainty, the intuition that the result will almost always fall within a small range fails miserably.
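For what it's worth, here is a minimal sketch of that point. The factor means and spreads below are invented for illustration (they are not the paper's fitted distributions); the only thing that matters is that one or two factors carry multi-order-of-magnitude uncertainty.

```python
# Minimal sketch: multiply seven Drake-style factors, each with a log-normal
# uncertainty. All means/spreads below are illustrative, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# log10 point estimates and log10 spreads for R*, fp, ne, fl, fi, fc, L (hypothetical).
log10_means = np.array([0.0, -0.5, -1.0, -3.0, -1.0, -1.0, 2.0])
log10_sds   = np.array([0.2,  0.5,  1.0,  5.0,  1.5,  1.0, 2.0])

samples = rng.normal(log10_means, log10_sds, size=(n, len(log10_means)))
log10_N = samples.sum(axis=1)            # log of a product = sum of logs

print("point-estimate N:", 10 ** log10_means.sum())
print("mean of N:       ", np.mean(10 ** log10_N))    # dragged up by the right tail
print("median of N:     ", np.median(10 ** log10_N))
print("P(N < 1):        ", np.mean(log10_N < 0))      # fraction of draws with a near-empty galaxy
```

With even one factor spanning several orders of magnitude, the mean and the median of N disagree wildly, and the probability of N < 1 is far from negligible, which is exactly the intuition failure described above.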
I’m pretty sure that the answer to the Fermi paradox is that interstellar travel is really, really hard. Even if a great many civilizations exist and have developed technology to our level, it wouldn’t surprise me if nobody had ever managed it.
For background information, see here: https://www.antipope.org/charlie/blog-static/2007/06/the-high-frontier-redux.html
This post and the cited paper are responding to a straw-man popular version of a paradox that does not exist in the academic community! It is like today writing a paper resolving the “twin paradox” in special relativity. We call it a “paradox” because it is catchy, not because it is actually a paradox. Everybody knows that the uncertainties on several terms in the Drake equation are large enough to make an “averaging” procedure meaningless. This is why the academic community is not very concerned with the “paradox”, because long ago they came to the same conclusion as the paper discussed.
You’ve posted this same claim here three times now, always with the same personal attack on me. Please get some evidence before you post it a fourth.
I wrote in the post: “Maybe people thought of it before, but didn’t publish it, or published it somewhere I don’t know about? Maybe people intuitively figured out what was up (one of the parameters of the Drake Equation must be much lower than our estimate) but stopped there and didn’t bother explaining the formal probability argument. Maybe nobody took the Drake Equation seriously anyway, and it’s just used as a starting point to discuss the probability of life forming? But any explanation of the “oh, everyone knew this in some sense already” sort has to deal with that a lot of very smart and well-credentialled experts treated the Fermi Paradox very seriously and came up with all sorts of weird explanations.”
You haven’t made any attempt to contradict that. If you want to, start by explaining:
Why NASA astrobiologists are saying that "there should still be gazillions of earthlike worlds out there" so the Fermi paradox "still holds", and coming up with crazy solutions.
Or why the Hart-Tipler argument exists if everyone of the rank of Hart and Tipler should already know the Fermi Paradox is solved and boring?
Or why Stephen Hawking, Brian Cox, Carl Sagan, etc are all on record taking the paradox seriously, and never mention this line of argument?
Or why you can find hundreds of papers in journals like International Journal Of Astrobiology describing the Fermi Paradox and proposing solutions, and none of them mention this?
Or why (described downthread) Milan Ćirković, a PhD with many publications in astronomy and astrobiology journals, recently published a book on the Fermi Paradox concluding it “remains completely and irritatingly unsolved”?
What do you think the people you mention mean by “the Fermi paradox” and what position should this paper convince them to hold instead?
What do you think is “the Hart-Tipler argument”?
Doesn’t this paper just agree with them?
Maybe Drake and Sagan made errors in the 60s that are corrected by this paper. But they did observe that the equation is volatile by having different scientists do different point estimates. Indeed, this paper describes itself as a synthetic version of that!
What Hart argued was that SETI is implausible, that anyone powerful enough for long enough to see would have eaten the universe. That is largely orthogonal to this paper.
After Hart, people slowly settled on the dilemma that either a civilization like ourselves is very unlikely or that a civilization like ours is not likely to last long. As far as I know, that is what people mean by “the Fermi paradox” in this century. And when they say it is important, they mean that it is important to know which horn is correct. This paper does not choose between them, so it does not resolve the paradox. It does not make the question any less important.
The abstract of this paper asserts “The Fermi paradox is the conflict between an expectation of a high ex ante probability of intelligent life elsewhere in the universe and the apparently lifeless universe we in fact observe.” I think that is just not what people mean by the phrase. When the people you mention say that “the Fermi paradox” is important, that is not what they mean. Anyhow, they “resolve” this conflict not by their method of calculation, but by saying that it isn’t obvious that life is common. And they do this merely by observing that some people think that life is rare. It is reasonable to ask why people are so certain that life is common, when others think it is rare. But it is not clear that anyone is certain that it is common. Of course, if people used distributions, that would be clear.
….
OK, here is Brian Cox confidently asserting that life is common, but when someone pushes back, he changes his story from observable universe to infinite universe. And when someone else pushes back on the difficulty of life, he seems to concede that they have a good point, but it’s not clear what he concludes. It’s not clear that he ever had an argument or a belief. But drawing attention to disagreement seems good for him. Surveying the literature and reminding him that people have a wide range of opinions is good. But it’s not a probabilistic technique doing the work.
Maybe SDO improve on their predecessors by ruling out his specific hypothesis of life in the observable universe, but not in the galaxy, but that’s a pretty fine-grained refinement. He shouldn’t have been so specific in the first place.
Hmm. It sounds pretty close to what I mean. What do you think they mean?
As I said, it seems to me that what people in the 21st century mean is the dilemma that either the filter is before us or after us; that it’s not clear which. (And more fine-grained questions, such as abiogenesis vs multicellular life, etc)
Maybe the people who coined the phrase in the late 70s meant that they or the SETI people before them thought that life was abundant and thus weren't happy with the arguments that, say, Hart made. But once the phrase was coined, it was used widely, even by people who thought that life was plausibly rare. For example, here is a 1988 article which seems to identify the Fermi paradox with Fermi's original question and doesn't treat it as a paradox.
I must still not understand your distinction. The paper you cite says
which sounds to me like what I said.
The classical Fermi Paradox is: We expect lots of aliens, but there are no aliens. The Hart-Tipler argument doesn’t dissolve Fermi’s Paradox, because it doesn’t show a flaw in the first half; it just shores up the truth of the second half. Without it, you can posit things like, “Well, maybe their communication gets more point-to-point, or uses some transmission medium our science hasn’t discovered yet, and they’re not interested in talking to primitives like us.” Hart-Tipler shows that that’s the God of the Gaps — if there were aliens, we would not be around to look for them.
SDO's contribution is showing the flaw in the first half.
Sure, the article starts with those sentences, but then it talks about all the different solutions, including from people who don’t expect lots of aliens. It treats a “solution” as an answer to what’s true, not a resolution of tension.
So you agree with me that Hart-Tipler is orthogonal to the Fermi paradox and that Scott is very confused when he mentions it?
I can see what you mean. But another way to read such an article is as a suggested correction to some input assumption to the Drake Equation, in this case that the existence of a moon big enough to count as a double planet made fl, abiogenesis, much more likely for us than for other Goldilocks planets (though they also go off on a little bit of a tangent about oxygen, which fits more into fi, development of intelligent life).
Well, I might be more circumspect and leave out “very”. But yeah.
I agree with Douglas Knight that the mere fact that you can point to repeated mentions of “Fermi Paradox” in the literature is vacuous, because you are using a misaligned interpretation of what “Fermi Paradox” means. When any physicist I know says “Fermi Paradox is unsolved,” they mean “we don’t know which term(s) in the Drake equation are very close to zero.”
Regarding your specific examples. The first quote you give is a simple misunderstanding of a NASA scientist’s press statement (which I wouldn’t put too much weight in in the first place):
"We already know enough about exoplanets to say that even if earthlike worlds are not the dominant habitable world in the galaxy, there should still be gazillions of earthlike worlds out there. The Fermi Paradox still holds for that set," he says.
He is rightly saying that our current understanding of the “earthlike worlds” terms in the Drake equation suggest that there should be gazillions of earthlike worlds. That is completely correct. And he is also completely correct to say “The Fermi Paradox still holds for that set,” given the definition of “Fermi Paradox” I gave above.
Regarding the existence of the “Hart-Tipler argument” (in other words the “Fermi Paradox”…), there should be nothing surprising about a name being given to a non-paradox. This is incredibly common. I previously mentioned “Twin Paradox” as a similar example, but see wikipedia’s list for an overwhelming exemplification of the point: https://en.wikipedia.org/wiki/List_of_paradoxes
Regarding why Stephen Hawking, Brian Cox, Carl Sagan, etc are all on record taking the paradox seriously, again, this is all consistent with the more correct definition of “Fermi Paradox” I gave above.
Regarding the hundreds of papers in journals like the International Journal of Astrobiology describing the Fermi Paradox and proposing solutions: again, same comment. Note that it is also not inconsistent to search for reasons why there *may* be extraterrestrial life that we can't see. The very paper you are hyping makes this clear: given our priors, the probability of seeing extraterrestrial life is neither particularly high nor low, so it makes sense that you would find all kinds of papers hypothesizing on either side (that ET exists and is unseen, or that it does not exist), and appropriately referencing the most common term associated with such questions, namely "Fermi Paradox", just as a paper discussing relativity might mention "Ehrenfest's Paradox", and may even discuss some subtle counterarguments in thought experiments that are not fully understood or agreed on, without thereby acknowledging that there are grave problems with relativity theory or that there exists a true "paradox."
Regarding the Milan Ćirković book (it’s *mentioned* downthread, but not more), I can’t really comment without having access to the book or the wider context of the quote.
My response to the Fermi Paradox would simply be that life might not be able to get that much more advanced than where we already are now. I doubt an alien civilization with our technology could find us, unless they were extremely close (like on Mars).
We have this “exponential growth” model for technological progress that denies the physical limitations of our universe. Are we supposed to get so complex and advanced that we can send Morse code messages by exploding stars on cue? That’s something other civilizations could see, perhaps, thousands of years later, but the resources required to pull off a stunt like that seem insane.
I mean, why couldn’t there be millions of other earth like civilizations with our same or even slightly better technology, sending out probes very similar to Voyager 1 and Voyager 2, and those probes just haven’t bumped into each other? I mean, the odds of a collision between Earth 1’s Voyager Probe and Earth 17’s Voyager Probe seem insanely small.
In addition to this, the Fermi paradox seems to assume that civilizations generally expand and use more and more energy until their effect on the galaxy can't be hidden; that they either destroy themselves or expand forever. It seems likely that a much more common trajectory would be to just slowly fizzle out. Our population will almost certainly begin declining in the next century. And that's not a doomsday prediction; we're just not having as many children any more. I don't see any reason that trend would turn around just because we developed technology that allowed us to travel to the stars. In ten thousand years that might just be a few million extremely long-lived humans noodling around the galaxy, or maybe hooked into some virtual reality experience. And maybe that's the standard progression for civilizations. Sure, we might be able to tell if there were several galaxy-spanning civilizations out there collapsing stars for energy. But we'd never notice a few million civilizations consisting of a small number of ancient individuals playing their version of WOW all day long.
But to explain the observed results, “much more common” isn’t nearly enough. That trajectory has to be nigh-universal, p>0.999999999, and that’s going to be a tough argument to sell.
Singularity is likely, and at least definitely not astronomically unlikely in my opinion. At that point, you can explore the universe out of pure scientific curiosity with autonomous probes. Interstellar travel is definitely not outside physical limits; in fact it’s likely to be achievable with close-to-current technology. With self-replicating robots it’s most likely possible to cause astronomical-size signals, but just visiting all planets is sufficient.
Population growth doesn’t slow uniformly across a civilization, and therefore the expected result of demographic transition is just that over the next few centuries, the highest-growth subpopulations expand until they’re the civilization-defining population.
It’d be very difficult to actually make the total population consistently shrink.
+1
If there are other advanced lifeforms in our galaxy, it’s extremely unlikely that their level of technology is the same as ours. There is no reason to assume that our tech level is the highest it can ever be, and even if you extrapolate a mere 100 years into our future, which is nothing on the cosmological scale, the capability to detect exoplanets and exobiology will vastly improve, if only by scale, i.e. putting larger telescopes farther away from Earth. The only argument against much further advancement could be the Great Filter, if the filter is that life wipes itself out soon after it gains the ability to do so (global thermo-nuclear war, catastrophic anthropogenic climate change, etc.)
Please see this comment where I cite another comment, and corresponding articles, explaining why this is wrong: https://slatestarcodex.com/2018/07/03/ssc-journal-club-dissolving-the-fermi-paradox/#comment-644903
Given 100 years or so to put the telescope/projector in place to use the gravitational lensing of our own sun, we could communicate with a radio-receiving civilization out to at least 100 light years, and could likely pick up radio communications from nearby galaxies.
I’ve said this elsewhere, but I don’t think there was a paradox to begin with, and I don’t think this gets us further in understanding the non-paradox.
The terms in the Drake equation have some actual (but unknown) values. If you were a super-being, you could calculate the real answer. Based on observational data, it seems likely that this will be 1, give or take a few orders of magnitude (maybe it’s a fluke that intelligent life exists at all, maybe there are some other civilisations we’ve not spotted).
The “paradox” is that when you make your best guess for the terms, you come up with a much larger number, but you know that your best guess isn’t very good, so (based on the observational data) you must be guessing one or more terms too high. Running the calculation with distributions just formalises the intuition that your best guess isn’t very good.
The whole “great filter” line of thought works by seeking to identify which of the terms in the Drake equation is small, although (as I’ve also said before) this seems to overlook the possibility that there are several fairly small terms, rather than one tiny one.
Put it this way: the only sensible range you can give for the probability of abiogenesis is (0,1], and that alone is going to allow the product to range down to zero.
At around the same time as Sandberg, Drexler and Ord published their paper on the Fermi paradox, Milan Ćirković published a book entitled The Great Silence: The Science and Philosophy of Fermi's Paradox. But while the paper claims to have "dissolve[d] the Fermi paradox", the book states that the paradox "remains completely and irritatingly unsolved." Remarkably, all four of these people are researchers at the Future of Humanity Institute.
My guess is that the dissolution by Sandberg, Drexler and Ord didn’t reach Ćirković in time to be included in the book.
Mmm, my main complaint about this paper was that I thought choosing log-normal/log-uniform distributions kind of loaded the die, by putting a lot of weight on the lower possibilities, but I had not seen supplement 2, where they go with uniform distributions and get more optimistic, but still shockingly high values for being alone.
:/
My take on this is that either we are alone or that there’s simply no super technology, meaning that Von Neumann probes that can tile the galaxy are just not possible, let alone any kind of FTL travel.
Von Neumann probes don’t require any super technology; the question is just how cheap and small you can get them. (At ‘current tech levels’ it would take us a long time to build and send them, but not a long time in galactic time.)
At the speeds we can move stuff right now, they'd still malfunction by the time they got to another star.
Making a robot that can work reliably for a few thousand years is still in the realm of super technology.
I mean, we are talking about a robot that can build two copies of itself from base materials found in planets and asteroids. That’s highly non-trivial. And then send those two robots on other trips to the next systems.
It is possible that building reliable Von Neumann probes is so hard that aliens only make them from the most basic materials available, and thus we didn’t notice them fucking around on our asteroid belt, because that was the only thing easy enough to build that would survive the trip.
Any interstellar travel involves accelerating to a reasonable fraction of the speed of light. Since the speed of light is about Mach 900,000 (not a typo, compared to Earth sea-level speed of sound), we're looking at needing about 4 orders of magnitude more velocity than anything we've done before. That's 8 orders of magnitude more energy; a tiny interstellar craft requires energy expenditures comparable to nuclear weapons both to start and to stop, and that's assuming perfect efficiency. Plus the law of conservation of momentum inevitably requires reaction mass, so you have to accelerate the fuel. Hell, you need about seven magic wands just to keep the damn thing from melting while you accelerate it.
Then you need to take it across 30 million million miles (assuming you’re going to alpha centauri). Hitting even the tiniest bit of gravel or snowflake, at .3c, will be like hitting – honestly, no simile can do it justice, so just think of the worst anti-tank round impact ever and it will be much worse.
And you plan to do this to a factory that can reproduce itself from unknown raw materials at unknown temperatures in unknown gravity and atmosphere. So it basically has manufacturing capability more sophisticated than all of modern Shanghai, under exceedingly tight weight limits (each pound added is an unknown-but-huge amount of extra fuel for acceleration and deceleration, call it millions-to-one for a conservative estimate).
If that’s not super technology, I don’t know what is.
Obviously, all of these estimates are very vague. We’re not talking about a specific proposal here (everyone who tries to make a specific proposal does this math and gives up). But none of the above is within a few orders of magnitude of being workable; I could be off by a factor of 100 somewhere and it wouldn’t change the conclusion at all.
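For a rough sense of the energy figures sketched above, here is a back-of-envelope check under assumptions of my own (a 100 kg probe at 0.3c; neither number comes from the comment):

```python
# Back-of-envelope check of the "comparable to nuclear weapons" claim, under
# assumptions of mine: a 100 kg probe cruising at 0.3c. Classical kinetic
# energy is used; at 0.3c the relativistic correction is under 10%.
m = 100.0            # kg, hypothetical probe mass
c = 3.0e8            # m/s, speed of light
v = 0.3 * c          # m/s, hypothetical cruise speed

ke = 0.5 * m * v**2                  # joules: needed once to accelerate, again to stop
tsar_bomba = 2.1e17                  # J, roughly 50 Mt TNT, the largest nuclear test
print(f"KE = {ke:.2e} J  (~{ke / tsar_bomba:.1f} Tsar Bombas, each way)")
```

Even for a 100 kg probe, each leg of the trip costs on the order of the largest nuclear device ever detonated, before accounting for reaction mass or inefficiency.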
I have a more life-optimistic refutation of the Fermi paradox.
The standard reasoning assumes galactic civilizations are non-territorial: anyone can travel anywhere, if they have the technical means. But the experience on Earth is that all territory is governed, and the government of a region decides who gets to travel there.
So a very likely explanation for why no one is coming here is that the local government doesn’t want it. There may well be thousands of civilizations and empires in our galaxy, but the only one that matters is the one controlling our solar system, and it happens to like it this way.
Thoughts?
On Earth, a government only controls territory if it can effectively project force there. Otherwise, you get settlers, raiders, or filibusterers coming in regardless of their nominal claim. Without faster-than-light communication, it'll take decades or centuries to project force over interstellar distances – so you'd need to assume an unusually stable galactic political environment, or unusually well-hidden starbases in every solar system, for that idea to hold.
Possible insight: the Drake equation as written predicts the number of "detectable" civilizations in the galaxy (and can be trivially extended to the visible universe). But if we define "detectable" as detectable by current SETI programs, we know N=0 and can perhaps use the principle of mediocrity to set N(expected)=0.5.
In order to know the number of civilizations that currently(*) exist, we need to break down the 'L' term, defined as the length of time over which such civilizations release detectable signals, into Le and fs, where Le is the length of time that the civilization exists and fs is the fraction of the average civilization's lifespan that is spent emitting detectable signals (or, for static civilizations, the probability of any given civilization settling into a signal-emitting state). Then solve for N~0.5, divide by fs, and estimate the number of civilizations that currently exist but that we haven't heard of.
Many proposed resolutions poke at one or the other of those elements, e.g. maybe they all nuke themselves into oblivion or run off into a Dark Forest, but Drake’s math lumping two distinct behaviors into one term confuses the issue. And both terms will have large error bars, thanks to our trying to extrapolate from a sample size of one, where if either one of them is close to zero we get the observed universe.
* For the relativistically appropriate definition of “current”
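A toy version of the split proposed above, purely to show the structure (every number below is made up; Le and fs are the terms defined in the comment):

```python
# Toy version of the proposed split of Drake's L into Le (civilization lifetime)
# and fs (fraction of that lifetime spent emitting detectable signals).
# All numbers are invented for illustration.
R_star, f_p, n_e, f_l, f_i, f_c = 2.0, 0.5, 1.0, 0.1, 0.01, 0.5   # hypothetical factors
Le = 1e6        # years, hypothetical civilization lifetime
fs = 1e-3       # hypothetical fraction of that lifetime spent being "loud"

N_detectable = R_star * f_p * n_e * f_l * f_i * f_c * (Le * fs)   # what SETI could see
N_existing   = R_star * f_p * n_e * f_l * f_i * f_c * Le          # what is actually out there
print(N_detectable, N_existing)    # 0.5 vs 500: the gap is just a factor of 1/fs
```

Under these made-up numbers, a mediocrity-style N(detectable) of about 0.5 is compatible with hundreds of currently existing but quiet civilizations, which is exactly the point of separating Le from fs.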
It seems to me that the f_i term is overloaded, and should be something like the product of f_mc (the fraction of life-bearing planets which evolve multicellular (or similar) organisms), f_in (the fraction of planets with multicellular organisms which evolve intelligence) and f_civ (the fraction of planets with intelligent organisms which develop civilisations).
There are a fair number of such more fine-grained versions of the Drake equation in the literature. But the basic conclusion of our paper still applies (in fact, since the new terms each have their own uncertainties things tend to get even more uncertain rather than less).
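A quick numerical illustration of that reply, with invented spreads: splitting a lumped f_i into three independently uncertain sub-factors tends to widen the overall distribution rather than narrow it.

```python
# Illustration (invented numbers): splitting f_i into f_mc, f_in, f_civ, each
# with its own uncertainty, tends to increase the total spread.
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# A single lumped f_i with a 1.5 order-of-magnitude spread (hypothetical), log10 scale.
f_i_lumped = rng.normal(-2.0, 1.5, n)

# The same point estimate split into three sub-factors with independent spreads (hypothetical).
f_mc  = rng.normal(-0.5, 1.0, n)
f_in  = rng.normal(-1.0, 1.0, n)
f_civ = rng.normal(-0.5, 1.0, n)
f_i_split = f_mc + f_in + f_civ        # adding logs = multiplying the factors

print("spread (lumped):", f_i_lumped.std())   # ~1.5 orders of magnitude
print("spread (split): ", f_i_split.std())    # ~sqrt(3) ~ 1.7 orders of magnitude
```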
Please tell me they didn't rename Monte Carlo a "synthetic point estimate".
The true cause is clear: Trump’s policies are working. Personally, I’m glad we don’t have illegal space aliens here in this neighborhood, turning everything into computronium.
Another way to look at this calculation is that it's what you'd expect from any reasonable prior probability distribution over the Drake parameters, after an update on the observation that we appear to be alone in the galaxy. That update involves fattening up the left error bars, and the result (that the posterior probability of the observation given the distributions isn't vanishingly small) is a way of checking that we made an update that's approximately big enough.
I'm sure that we implicitly take the fact that we appear to be alone into account when we judge the size of the error bars. On the other hand, if we knew there were thousands of (biologically independent) civilizations in the galaxy, we wouldn't much credit the possibility that the chance of life arising on a given habitable planet is 10^-12.
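A sketch of the kind of update described above, with a toy prior and the crudest possible detection model (neither is the paper's own choice):

```python
# Sketch of "update on seeing nobody": sample log10(N) from a toy wide prior,
# assume P(no detection | N) = exp(-N), and condition by importance weighting.
# Both the prior and the detection model are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
log10_N = rng.normal(0.0, 4.0, 1_000_000)      # hypothetical wide prior on log10(N)
N = 10 ** log10_N

p_silent = np.exp(-N)                          # crude chance we'd have seen nothing
w = p_silent / p_silent.sum()                  # posterior weights after the Fermi observation

print("prior     P(N < 1):", np.mean(N < 1))
print("posterior P(N < 1):", np.sum(w * (N < 1)))
# The posterior piles weight onto small N: the left error bar "fattens", as described above.
```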
I don't get why people seem to think that the probability of abiogenesis is not astronomically low. AFAIK we still don't have much idea how the first self-replicating organism came into being, and it may well be that it requires an enormous number of coincidences. The bacteria with the smallest genomes have hundreds of thousands of base pairs. If any of that is even slightly different, it wouldn't work. (Well, small mutations sometimes still leave it viable, but a lot of information needs to stay intact.) It's thought that in the first organisms RNA filled the role of both nucleic acids and proteins; the first self-replicating structure was likely simpler than the simplest extant bacteria, perhaps a single molecule. But still, it had to come together from the building blocks "all at once". It's quite possible that no natural process makes a self-replicating combination more likely than any other combination of building blocks (e.g. nucleotides). The probability of a randomly assembled structure being self-replicating may well be 1 in 10 to the hundreds or thousands.
Once there is life, the evolution of increasingly complex life forms seems to have been relatively straightforward. Sure, multicellular life took 2+ billion years. But note that once you have self-replicating organisms, there are lots of chances to evolve more complexity: if one path is a dead end, life is still around. The chance of developing complex and intelligent life might be low in the everyday sense, but I see no reason to think it's astronomically low.
Some people have argued that the relatively early emergence of life on Earth suggests that abiogenesis is likely. The two earliest pieces of evidence of life (based on Wikipedia) are from 4280 to 3770 MYa and 4100 MYa respectively, in other words 120-630 and 300 million years after ocean formation (4400 MYa), or roughly 2.7–14.3% and 6.8% of the ocean's current age. (I don't know how certain these are; evidence from 3500 MYa seems to be certain.) That is, the chance of abiogenesis this early is at least roughly 2.7–6.8%, assuming that abiogenesis is unlikely enough to occur only once at the current age of the universe. (Probably more due to observer effects, and perhaps abiogenesis is more likely on young planets.) IMO that's not small enough to dismiss the rarity of abiogenesis as an explanation, particularly if we take priors into account, which IMO should be much lower on abiogenesis than any of the other variables in the Drake equation. I don't think there is a reason to think that any of the other variables in the equation being astronomically small is a better explanation of the Fermi paradox than that a single event of roughly 2.7–6.8% chance (or higher) happened.
Note that in the SDO paper, the key to getting a significant probability of an empty universe is the very low probability of abiogenesis at the lower end of a (very conservative) estimate with an uncertainty of 200 orders of magnitude, trumping everything else. The paper is a good addition to the topic, but (IIUC) it depends on abiogenesis for the explanation, while a low probability of abiogenesis alone explains the Fermi paradox. Btw, their estimated range for the probability of intelligent life evolving once there's life is 0.001 to 1.
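The timing fractions quoted above follow from simple arithmetic on the stated dates; a tiny script makes the bookkeeping explicit (the dates are the ones given in the comment, in millions of years ago; nothing else is assumed):

```python
# Arithmetic behind the early-abiogenesis percentages above.
ocean_formation = 4400                    # MYa, ocean formation
earliest_evidence = [4280, 4100, 3770]    # MYa, claimed earliest traces of life

for date in earliest_evidence:
    delay = ocean_formation - date                 # My after the oceans formed
    fraction = delay / ocean_formation             # share of the ocean's current age
    print(f"{date} MYa: {delay} My after ocean formation ({fraction:.1%})")
# -> 120 My (2.7%), 300 My (6.8%), 630 My (14.3%)
```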
Want to point out that “astronomically low” only means “low” when we are talking about astronomy. Tho a good point is that life probably needs a second generation star, for the elements higher in the periodic table.
I suspect that super-early fossils mean that abiogenesis happened way earlier, since self-replicating amino acids would not leave microfossils. Either the period from primordial soup to living beings that can leave fossils is super short, or the amino-acid/whatever primordial soup period started as soon as we had oceans.
@naturatahilaritatum
Read Supplement II. It has updates for finding life on Europa, finding a "dark biosphere" on Earth, and other scenarios, and they raise our chances of not being alone a lot.
Oceans to primordial soup may easily have been pretty quick. There are two other intervals we have to consider: primordial soup to earliest life (which I hypothesize to be very rare), and earliest life to life that leaves detectable fossils. (Also, first detectable fossils to the first fossils actually found.) I don't know how long it would take for self-replicating structures to evolve into something detectable – since AFAIK cell-like structures arise pretty easily even in lab experiments, perhaps not too long.
Notes: the 4100-million-year-old evidence I referred to is just certain materials that were probably produced by living organisms, not actual cell fossils. An amino acid alone certainly wouldn't self-replicate; it would need proteins (i.e. amino-acid polymers), or more likely RNA (nucleotide polymers) or similar.
I think that the earliest for-sure fossil is 3.5 By old, and the rest is just 'strong evidence' of life, but it is still relatively fast compared to planetary formation and intelligent life arising, and even short compared to the longest estimates for civilization lifetimes.
I find fascinating the idea that, assuming a non-rare Earth, the hard part is not Fl but Fi, given that on Earth life appeared in, at most, one billion years, but intelligent life took 4 times that.
It would mean SDO may even be optimistic, given that their estimate for Fi goes from 0.001 to 1, but that process took 4-16 times longer than abiogenesis, and thus had 4-16 times more chances of going wrong at any point (due to an astronomical catastrophic event, or any other reason). Hell, there's even time for abiogenesis to have happened multiple times on Earth, with only one of the life strains surviving into us.
Once prokaryotes developed, things would've had to go very wrong to undo that. Extremophile bacteria can survive a lot; they are thought to have survived periods of snowball Earth. Things could've gone that wrong, but it's not very likely – definitely not overwhelmingly likely.
@10240
Yeeeah, but there’s still a huge chasm between life-that-can-leave-fossils and intelligent-life, which is what Fi measures.
Abiogenesis could have happened on Mars, Venus or Europa, but could have been killed off by the climate (dunno if it got to the prokaryote level, to be fair).
Or just capped forever into non-intelligent life. Maybe there’s extremophile life in Venus? Maybe Ne * Fl was 4 in our system, but Fi actually cut the end result to 1.
Sure, I don’t think f_i is close to 1, I just don’t think it’s vanishingly small. For example, according to some predictions, Earth will be too hot for life in a billion years. If evolution of intelligent life took just a billion years longer, it would’ve been too long. This may cut it short a lot of the time, but I don’t think it’s overwhelmingly likely.
I agree that arbitrarily low values of the probability of abiogenesis cannot be ruled out on present information, and this alone is sufficient to dismiss the "paradox". My guess is that the probability is not tiny, based on (a) the apparently short period between ocean formation and abiogenesis and (b) the fact that we know it's chemically possible, and the apparently huge number of opportunities for it to happen (considering the volume of the oceans and the timescales involved). But neither of those is decisive. The short period could have been a total fluke. It may be that abiogenesis can actually only occur in some specific circumstances (ocean vents?), cutting down on the apparently huge number of opportunities. Or it could just be extraordinarily improbable, e.g. it has a one-in-a-googol chance of happening each time the conditions are met. Maybe it requires quantum tunneling?
That said, I think you’re too quick to dismiss the possibility that multi-cellular life is highly improbable. We know that for 2 billion years, the oceans were full of single-celled creatures, a huge number of generations, without evolving multi-cellularity. I don’t see any reason to think that process was probable: on the contrary, a number of the steps seem (naively) to be highly improbable.
I would not be particularly surprised to learn that we live in a universe where many planets have single-celled (or equivalent) organisms, but vanishingly few have multi-celled (or equivalent) organisms.
Multicellularity evolved independently 46+ times (someone else linked this earlier), so I don't think it's implausible, given the right conditions. Perhaps it needed specific conditions regarding temperature, oxygen levels, whatnot? Or perhaps once there is multicellular life, it creates stronger evolutionary pressure for other organisms to evolve multicellularity, too? Though I'd guess it has advantages even if there's no other multicellular organism.
As for ample opportunities for life to develop: even if we count all cubic nanometers of the oceans of all planets in the observable universe, times the nanoseconds since the Big Bang, we get something like 10^110. If the probability of a self-replicating combination arising randomly is 1 in 10^200, that gives a probability of 1 in 10^90. The best objection to this is that we exist. So this would require that either the information content of a self-replicating structure (i.e. the number of combinations that have to be tried) is in a very narrow range (on a logarithmic scale), so that it happened at least once but not a lot of times, or that the entire universe (or perhaps multiverse) is vastly bigger than the observable universe. We've discussed this below.
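For concreteness, here is one way to run that back-of-envelope. Every input below is an assumption of mine (planet count, ocean volume, and so on); more generous choices push the exponent up toward the figure quoted above, but either way the total is dwarfed by a 1-in-10^200 target.

```python
# One possible back-of-envelope for "trials" = ocean volume (nm^3) x planets x
# nanoseconds since the Big Bang. All inputs are assumptions for illustration.
import math

ocean_volume_nm3   = 1.3e9 * 1e9 * 1e27      # Earth's oceans: ~1.3e9 km^3 -> nm^3
planets_with_ocean = 1e22                    # hypothetical: of order one per star
age_universe_ns    = 13.8e9 * 3.15e7 * 1e9   # years -> seconds -> nanoseconds

trials = ocean_volume_nm3 * planets_with_ocean * age_universe_ns
print(f"trials ~ 10^{math.log10(trials):.0f}")                     # ~10^94 with these inputs

p_per_trial = 1e-200                         # the hypothetical 1-in-10^200 from above
print(f"expected successes ~ 10^{math.log10(trials * p_per_trial):.0f}")
```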
Looking at the list, it looks like there’s some generous scoring to get to 46. A lot of the explicit examples (Cyanobacteria, Myxobacteria, etc) appear to be colony-forming with a degree of specialization of roles within the colony, but not really comparable to what wikipedia describes as “complex multicellular organisms”.
There also might be some semi-double counting among complex multicellular organisms, where closely related lineages may have evolved complex multicellularity separately but not really independently: kinda like how bees might have evolved eusociality separately from ants and social wasps, but they're so closely related that the separate evolution isn't really independent the way the eusociality of naked mole rats evolved independently.
Even with those caveats, there’s definitely strong evidence of multiple independent evolution of multicellularity (without looking at the list in detail, I’d guess on the order of 5-10), enough so that the core point of “multicellularity probably isn’t the hard part” seems to hold.
But that's the crux of the issue. Going through this thread, there are at least decent arguments that NONE of the early steps (i.e. before human intelligence) were really that difficult.
The only one I haven’t seen any discussion on yet (unless I missed it) is “Rare Cambrian Explosion” so the step from eukaryotic life to animal-like creatures.
The steps that seem likeliest to be hard, based on my read of the thread, are abiogenesis (in its broad sense, referring to the entire process of getting from primordial soup to something we’d recognize as a prokaryote), followed by eukaryogenesis.
Apart from the development of complex multicellular organisms, what’s the hard part of the Cambrian Explosion? Development of the mesoderm layer?
Well at least I helped to sway someone somewhat against it being the animal -> human level intelligence step. Usually in these sorts of discussions, like among the Youtube community that discusses the Fermi Paradox, that seems to be the consensus view (and I think that consensus is a bit stronger than it should be even if it’s ultimately true).
And I don’t know exactly what would be difficult about the Cambrian Explosion but to me it all seems so fast.
It goes from “The Blob” like creatures of the Ediacaran:
https://en.wikipedia.org/wiki/Ediacaran_biota
To basically all the creatures we know and recognize today, in a very short time span (~100 My or so, if I recall, with very fast changes throughout that period).
Wiki does list some decent causes of it:
https://en.wikipedia.org/wiki/Cambrian_explosion#Possible_causes
My take from reading that is sort of like “Once you have multicellularity it’s actually pretty likely you get something like the Cambrian Explosion because genes can start being used to code for multicellular complexity, and it’s almost certain they will be”
I don't agree with the paper's conclusion, because it seems to ignore our existence.
If I read the paper correctly, they simulate the ex ante distribution (given our uncertainties in the Drake equation) of the number of intelligent civilizations (and there's a decent chance it's zero), and then condition on various interpretations of "we haven't seen anyone else", and get an even higher chance of zero. But they then interpret the "0" outcome as: we are alone, there are no _other_ intelligences. For instance, they say "The probability of N < 10^−10 (such that we are alone in the observable universe) is 10%".
This seems wrong to me. We should condition on "there's at least one intelligence" (us). But then this blunts their conclusion; their own simulation seems to show that there is only a 2% chance we are alone (N = 1) (and that's just in this galaxy).
To argue my complaint another way: the Drake equation multiplies big and small numbers with many orders of magnitude of uncertainty. They correctly show that the expected mean is a pretty useless number; there will be very many orders of magnitude of uncertainty in the overall result. A priori it is quite plausible that the product, if we had the right probabilities, would be incredibly small: 10^-{something big}. Likewise, it could well be something huge: 10^{+ big}. But it seems 10^-big is basically off the table, given us. "Us alone, given us" seems to require the expected number of intelligent civilizations to be tightly constrained to O(1), maybe 10^-2 to 10^2, which is pretty narrow compared to all the other "+ big" possibilities.
The universe is bigger (perhaps much bigger) than the observable universe. The 10^-big numbers refer to the observable universe; what we know is that there is life on at least one planet in the entire universe, so the probability of that is unlikely to be very small. There may be vast parts of the universe where there is no life within the part of the universe observable from that point.
I said, and mostly believe, that not conditioning on "N >= 1" is a simple logical error, though your comment gives me a lot to think about.
If I understand you correctly (and I do not understand you fully), you are making a clever, subtle argument as to why it isn't. I would first challenge you: is your argument so very obvious that no mention, no hint of it, belongs in the paper? As I read it, and maybe I missed something, there isn't even a hint that 'our existence' is a relevant fact (or that it might not be); it's just ignored.
Second: they devote quite a bit of time to conditioning on the Fermi observation (i.e., we haven't detected life). Taking the paper on its own terms, why is this an interesting event to condition on, whereas 'we are here' isn't? If you take it as possible that other life might well sterilize/populate its reachable sphere, doesn't the fact that we are still around and asking these questions increase the chance that we are alone, and thus make the Fermi observation not worth conditioning on separately (or at least not easy to)?
Third, well, I’m fuzzy on this and it would be pointless/insanely-lengthy to try to draw out my tentative objections. I think your argument is something like there’s an effective multiverse (only with non-reachable ‘partitions’ of the universe instead) plus an appeal to a species-level anthropic principle; is that right? If there’s a definitive treatment of how to work with probabilities taking these issues seriously, I’d love a pointer.
TL;DR: I still think the paper must either condition on 'we are here' or tell us why it isn't necessary. Maybe, just maybe, it isn't (you haven't convinced me yet), but the paper is defective without addressing this.
Shorter:
Your defense fails if there is no speed-of-light limitation, if the observable universe = the reachable universe, or if everything beyond the Milky Way is a painted wall. But where do the paper's arguments incorporate the assumption that none of these hold? If they don't, either your argument falls or their arguments are wrong.
My argument fails if there is no speed-of-light limitation, or if everything beyond the milky way is a painted wall. I’m not sure what you mean by reachable universe; what my argument requires is that only a tiny portion of the universe could have reached us by now, had intelligent life evolved there.
The paper doesn’t condition on our existence. They either assumed my argument as obvious or, more likely, didn’t consider conditioning on our existence at all – both are consistent with the paper, and neither contradicts my argument. If they considered conditioning on our existence, but assumed that the opposite of my argument is true (that is, we can actually assume that N>=1), that’s inconsistent with the fact that they don’t actually condition on N>=1 in the paper.
Most discussion of the Fermi paradox posits that the parameters are supposedly unlikely to be very small, yielding N>>1 and creating a paradox. So perhaps they didn't think about the case where N<=1. (Note that even if the observable universe were the entire universe, that wouldn't strictly imply that N>=1; if e.g. N=0.5, perhaps we were just lucky. If N were very small, that would be suspicious.)
Thanks. I don’t think we disagree as such, but you’ve given me a lot to think about.
First, ‘reachable universe’ was a bug, I meant ‘entire universe’.
Second, N_actual >= 1! But yes, N_expected could be 0.5 (I suggested above that we should be surprised if N_expected is outside [10^-2,10^2])
Third, IMO it's just silly to think that your argument is so obvious it doesn't need including in that paper! Even if fully correct, it's pretty subtle (and made more so if you want to argue that conditioning on the Fermi observation, as the paper does, is 'obviously' unproblematic). Either I (and perhaps you too) am making a basic error, or the paper is wrong in not conditioning on N >= 1, or the paper is written unacceptably badly in not touching on their argument for skipping this.
Fourth, I remain unconvinced about your conclusion (not worth rambling on as to why), but it would be really great to see a formal/careful treatment of the subject that takes your perspective seriously. Or even (I think fairly different, but in the same family) the 'Fermi Paradox in a Multiverse' question.
More-or-less. A multiverse would be another explanation why N<<1 could be possible and consistent with our existence. But the universe is usually assumed to have no boundary, which implies that it's bigger than the observable universe (though we have no idea how much) even without supposing a multiverse.
We might expect more consistency across a universe much larger than what we can see – things we learn about what we see (still a pretty large sample!) might have a bigger chance (no idea how to quantify that) of generalizing to countless unobservable segments of our universe than to other universes. Laws of physics, at least?
We do discuss observer selection effects briefly in Supplement II. The main takeaway is that we cannot conclude much from our own existence and biosphere since it is conditioned to produce at least one observer.
Pr[N=1|we exist] = Pr[we exist|N=1]Pr[N=1]/Pr[we exist] = Pr[N=1]/Pr[we exist]
There are some interesting considerations if one makes a hard steps model a la Carter and subsequent papers, where we might infer some posteriors for the hard step difficulty given Earth’s life history – I haven’t seen that fed into a Drake-like model directly. Might be worth doing.
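A toy numerical version of the Bayes identity above, purely to show what conditioning on our existence does and doesn't buy (the wide prior and the Poisson model are mine, not the paper's):

```python
# Toy observer-selection calculation: sample an expected number of civilizations
# per galaxy (lambda) from an invented wide prior, model actual counts as
# Poisson(lambda), and compare "galaxy empty" with "we are alone given we exist".
import numpy as np

rng = np.random.default_rng(3)
lam = 10 ** rng.normal(0.0, 4.0, 1_000_000)   # hypothetical prior on expected civilizations

p_exist       = 1.0 - np.exp(-lam)            # P(at least one civilization | lambda)
p_exactly_one = lam * np.exp(-lam)            # P(exactly one | lambda)

print("P(galaxy empty), unconditioned:", np.mean(np.exp(-lam)))
print("P(we are alone | we exist):    ", p_exactly_one.mean() / p_exist.mean())
# Conditioning on our existence removes the "empty galaxy" worlds but still leaves
# a broad spread over lambda, consistent with the point that our own existence
# doesn't pin down much by itself.
```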
1) So much for “the results of real science can be replicated.”
2) CJ Cherryh’s novella Pots is recommended for all and sundry.
Another candidate for an Early Filter is Oxygen Catastrophe -> Snowball Earth. In Earth’s history, the oxygenation of the atmosphere is believed to have caused a mass extinction of anaerobic bacteria, since oxygen is toxic by default. And in addition, the process of oxygenating the atmosphere reduced CO2 and CH4 concentrations by several orders of magnitude, causing global cooling and a large-scale glaciation event which may have (this part is disputed) iced over the oceans completely for a while before new CO2 built up in the atmosphere from volcanic eruptions.
I'm not sure how close-run it was, but from what I've read it seems plausible that a bit less solar input or a bit less volcanic activity could have led to the Earth staying iced over.
I suppose this is more a candidate for a “pretty good filter” (weeding out something like 90% of life-bearing planets) than a true “great filter” that explains the Fermi Paradox all by itself.
It’s unlikely to explain the paradox, but such things may partly explain why it took so long for complex life to develop, as evolution was limited for some long periods, and some earlier progress may have been wiped out.
This seems too Earth specific to have much explanatory power. Even if Earth did experience total glaciation, another planet slightly closer to the Sun would not have. Unless we’re saying that only planets situated exactly like Earth can evolve life, in which case we’re just saying that n_e is low.
The broader claim would be that there are lots of things which might happen to wipe out life before it could evolve intelligence, i.e. f_i is low because most life-bearing planets experience total extinction events too early to evolve intelligence.
You could phrase it that way. My hypothesis is that an oxygen catastrophe leading to a life-ending snowball glaciation provides a lower limit on the amount of solar input an earthlike planet can have and still be suitable for the development of life more advanced than cyanobacteria, and that Earth seems to have squeaked by just over that lower limit.
The second part of my hypothesis, which I neglected to include in my first post, is that there's also an upper limit to solar input, lower than the current habitable zone of our sun would seem to imply. I have no idea how far Earth was from that upper limit, and as such I don't know how narrow the space between my two hypothesized limits is.
Main sequence stars get brighter over the course of their lifetimes, by quite a bit. The early sun was only about 70% as bright as the modern sun, which is part of why the early Earth needed a substantial greenhouse effect to keep temperatures high enough to maintain liquid oceans. And as has been discussed in other threads, the Earth may only be half a billion to a billion years away from getting cooked as the sun continues to get brighter.
So yes, an Earthlike planet would be much less likely to snowball if it’s slightly closer to its primary, but it will also have a correspondingly shorter time for life on the planet to develop and evolve multicellularity, intelligence, etc. And we don’t have a good idea how much luck was involved in Earth’s life having enough time to evolve a technological civilization during Earth’s window of habitability.
My favorite hobby-horse strikes again: think about the entire distribution, not just one summary statistic!
There’s also the idea of The Great Filter, which is basically a sort of wall that all species must pass through to attain intelligent life, and subsequently space travel and conquest. It boils down to three possibilities: either we’re the special ones (the filter is behind us), we’re among the first (in which case there is no filter), or we’re all fucked (the filter is in front of us). That last one could be as simple as global warming killing off a species before it realizes its impact on the world, or some kind of “reaper” alien civilisation that hides from us until we near the space conquest era, at which point they insta-gib us. There’s also the idea that hyper-advanced alien civilisations just sort of uploaded their brains to the virtual world, with automated everything. A bunch of theories like that would easily explain why we’ve received no contact.
Learned about all this here
I’d like some comment on one thought I’ve had on the Fermi paradox.
The Fermi paradox is based on the lack of detectable radio transmissions. What if life stops making detectable radio transmissions? Earth might go dark on a cosmic scale within the next few decades, for several reasons.
First, signals are getting weaker. We use cell phones and wireless network signals. Radio and TV broadcasts are less and less relevant. For long ranges, we use either cables or relays of weaker signals. If only these weaker signals are used in the future, could they be detected by alien civilizations?
Second, we are using digital signals with compression and encryption. Compressed and encrypted data are very similar to random noise. Could this random noise be identified by another civilization as a product of intelligent life?
If each civilization only uses the type of radio signal detectable at interstellar ranges for a century or two, this would make the Fermi paradox a lot less surprising.
The Fermi Paradox in one form is like that, and your explanation would be adequate. However, in a stronger form it makes sense that advanced civilizations would send out exploratory probes that report on resources, or even terraform worlds for the civ so that they can be colonized (if terraforming is necessary; otherwise it’s just a paperclipping AI, which is also easily detectable). These things would be easy to detect compared to a short stretch of radio transmissions.
The problem with the stronger form is that it doesn’t actually make that much sense, given the distances and speeds involved.
Getting anywhere in space takes forever, sending back any signals takes decades at best, sending any instructions from mission control doubles the time. Colonization operates on civilization-scale time-frames (millennia to get to your destination, if it isn’t right next door).
That’s a whole lot of time and effort for a highly uncertain payoff. Paperclip-maximizing AIs are no better, because the idea assumes an intelligence smart enough to overcome all obstacles, but at the same time dumb enough to be unable to alter its teleology (frankly, this is a problem with all “paperclip maximizer” arguments).
Spotting someone over interstellar distances, unlikely though it may be, still seems orders of magnitude more probable than actually running into them.
It doesn’t have to be the whole group that makes the decision. It just happens faster that way. If you have to wait for private groups to decide to go colonize the next world, it means you need another ~century for private groups to accumulate the wealth. Leaving for another star system is similar in wealth requirements.
Civilizations have certainly built walls to stop their citizens from leaving. They’ve restricted wealth accumulation by their citizens. It’s easy to imagine any particular civilization locking things down. But you need a universal decree that every civilization is forced to follow.
The “any civilization smart enough to colonize the stars will be smart enough to agree with me that colonizing the stars is stupid” argument is charming, in its way.
“Getting anywhere in space takes forever”
It takes forever relative to a human life, not to geologic time. I’ve heard estimates that it’s physically possible to turn the whole galaxy into dyson spheres within a few million years. A few million years is a lot faster than the other steps we’ve been talking about such as the evolution of Eukaryotes. It’d be a very minor filter/blocker if anything.
We also should be able to easily detect a (far away) galaxy that was turned into a dyson swarm. My understanding is that there’s been quite a bit of research in this realm and the result was “No galaxies have been turned into dyson swarms.”
My take is not that this means there are no intelligent aliens. Rather it means intelligent aliens don’t build dyson spheres (for some other reason other than plausibility/difficulty).
Milky Way is 100k light years across. How long would it take a nanobot swarm to build a dyson sphere? And they don’t have to finish before they send out the next wave to the next star.
You have exponential growth, whereas the volume you can reach only expands as the cube of time. So you’re pretty much going to expand as fast as you can. The area of control would probably grow at some appreciable fraction of the speed of light. A million years seems much too long.
Intergalactic travel takes a bit longer. Andromeda is 2.5 million light years away, and the diameter of the Local Group is about 10 million light years. But still, 10 million years is peanuts on a geological timescale.
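To put rough numbers on that (the speeds below are arbitrary illustrations, not anyone’s estimate), a minimal back-of-the-envelope sketch:

```python
# Crossing time for a 100,000 light-year galaxy at various expansion speeds,
# ignoring the time spent building at each stop (speeds are arbitrary examples).
galaxy_diameter_ly = 100_000
for fraction_of_c in (0.5, 0.1, 0.01):
    years = galaxy_diameter_ly / fraction_of_c
    print(f"at {fraction_of_c:.0%} of c: {years:,.0f} years to cross")
```

Even at 1% of lightspeed the crossing takes about 10 million years, which is still tiny compared to the billions of years the evolutionary steps in this thread take.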
@Edward Scizorhands
I think you’re a bit optimistic about the scale of effort needed for interstellar travel.
To put things in perspective: either you’re going slowly enough to require millions of years before reaching the destination, or you’re going fast enough that running into anything along the way (and we aren’t talking about big objects here) will really ruin your day. That’s a credible failure mode right there. Even if you decide to chance it, you’re still likely to have hundreds or thousands of years for things to go wrong before you get where you’re going.
The fewer the ships you send, the less chance that any of them actually survive the trip. I wouldn’t give a solitary Mormon ark (or compatible) a snowball’s chance in hell of anything other than a quiet demise somewhere in the void.
It gets even more fun when you consider that unless you get it right on the first try, you’ll be waiting a long time to find out that you got it wrong (and you may never find out why). Based on our experience of practical engineering, the way to produce reliable models is continued refinement and iteration. However, the time scales involved in interstellar travel are such that by the time you complete a successful trip, thousands of years will have passed. We are literally looking at lifetimes of civilizations, not private groups. Sustaining the effort would require continued commitment over millennia.
It’s not impossible for a subset of a civilization to maintain this commitment (especially in the face of repeated failures), but incredibly unlikely unless there’s overwhelming benefit to doing so. All it takes is for one generation of leadership to say “Screw that! We’ve got more important things to worry about.”
@Syx78
Geological time is fine when looking at geological processes, but that’s not what we’re talking about.
You could, of course, postulate a species that exists on geological timescales and can afford to wait that long, but William of Ockham will be giving you a disapproving look. Understand that what you’re doing is postulating the equivalent of what Charlie Stross (linked to elsewhere in the thread) describes as a “magic wand”: arbitrary assumptions that will allow you to dismiss objections. I’ll grant that we cannot say that a species with a lifetime measured in thousands or millions of years doesn’t exist somewhere in the universe, but we also have no reason to believe it does.
This is another reason why I propose that interstellar colonization requires a civilization-level commitment, because civilizations (or even meta-civilizations, consisting of a number of successive civilizations sharing the same historical heritage) are the only “intelligent” entities we know of that could conceivably exist that long.
A couple of further words about the self-replicating probes that everyone likes so much. First off, as Stross points out, they’re pretty “magic-wandy” as it is – given that they have all the problems of interstellar ships, but are also expected to function completely autonomously (remote control is not an option given the delays), to reliably produce perfect copies of themselves, and to be able to evolve their operation/programming to meet challenges you couldn’t possibly foresee – but not so much that they decide they don’t care about the mission. That’s not the problem.
The problem is what would they be for?
Sure, you could send them out to explore – and then wait generations to get any information. You could have them terraform other planets, I suppose – and wait millions of years for them to finish, plus time for colonists to actually get there. The only way it makes sense is if you’re prepared to wait for generations and expect not only that whoever exists in the future will care about the mission as much as you did (enough to be listening and actually do something about it), but that the payoff will be worth the bother.
Now, it so happens that we know of at least one possible way of making “self-replicating probes” – we call it “life” and panspermia is very much a known theory.
The main problem with that, from a teleological perspective, is: what’s in it for the sender? Life is pretty good at gaining a foothold anywhere it can (and places we wouldn’t think it could), but it’s also fundamentally selfish – self-replication and propagation tends to trump other concerns. You can send lifeforms into the vast galaxy to prove that you can, but you’re not likely to get anything out of it.
A final point cribbed from Stross that’s worth keeping in mind as we’re considering teleology: there are vast areas of the Earth that are hugely easier to colonize than other planets in the solar system, let alone exoplanets, yet we’re perfectly content to leave them alone – despite the fact that we have the requisite technology.
Stross offers the example of the Gobi Desert, while I’ll offer that we have a whole huge, empty continent down south. Antarctica isn’t exactly welcoming, but it sure beats anything to be found in nearby space – for a start, it has abundant supplies of both air and water. So why isn’t there a rush for Antarctica?
As Stross points out: “there’s no ‘there’ there”. Antarctica is a frozen wasteland. There might be resources that we could mine there, but they’re only worth going after if there are no lower-hanging fruit. Even then, an isolated extraction facility (such as a drilling platform) makes infinitely more sense than a colony.
It’s not that we can’t colonize Antarctica, we simply won’t.
Given that we aren’t colonizing every last inch of Earth “just because”, why would we expect other species to colonize every last inch of the galaxy for no better reason?
Sometimes the boring answer is our best bet.
This pretty well formalizes the way I’ve always thought about the Drake equation. I’ve always thought it was kind of meaningless since we don’t really know anything about many of the parameters. Here’s how I’ve always, more informally, thought about our uncertainty in the parameters, with some comments about them that I haven’t seen before:
R*, the average rate of star formation in our galaxy
fp, the fraction of formed stars that have planets
ne, the average number of planets that can potentially support life, for stars that have planets
fl, the fraction of those planets that actually develop life
fi, the fraction of planets bearing life on which intelligent, civilized life has developed
fc, the fraction of these civilizations that have developed communications, i.e., technologies that release detectable signs into space
L, the length of time over which such civilizations release detectable signals
Preface: my area of expertise is chemistry, so I have a lot more to say about the chemistry-relevant parameters than about the others.
R*: AFAIK we know this fairly well.
fp: We didn’t know until fairly recently but now we are confident that this is fairly high
ne: Unclear, since we don’t know how diverse the conditions that can support life/abiogenesis are. Maybe it really requires an earth-like planet. Maybe it’s also possible somewhere like Titan, Europa, or Neptune, etc., with some other kind of self-replicating biochemistry that we don’t know about. With our current chemical knowledge we could not have imagined earth biochemistry, or the compounds that comprise it, a priori; and even given the list of structures, we could not predict a priori whether the self-replication would work. So we have no idea whether the abiotic processes on non-earth-like planets could undergo abiogenesis. And why would the probability of abiogenesis be uniform across different world types? Maybe the rate on an earth-like planet is 100 times that on a europa-like planet and 10^50 times that on a mars-like planet, owing to differences in the types of biochemistries possible there and how easy a path to them from the naturally occurring abiotic processes is. So really, calculating the ne/fl portion has to take into account the probability of forming each type of planet, chemistry-wise, and the fl rate associated with it (see the toy sketch after these parameter notes).
fl: we honestly have no freaking clue at all.
fi: again, we don’t know jack.
fc: we don’t really know, but I can’t imagine that something you could call an intelligence won’t transmit information in some way. Who knows if it will be in the form we are looking for, though.
L: don’t know, but all other things being equal, since intelligent life can adapt on much faster timescales than nonintelligent life, it should be much less susceptible to extinction rather than more – though I admit that’s just my assumption.
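Here’s the toy sketch I promised under ne – every number in it is invented purely for illustration; the point is just that ne*fl is really a weighted sum over planet types rather than a single number:

```python
# ne * fl as a sum over planet types: each type's abundance per star times
# that type's (unknown) chance of abiogenesis. All values below are made up.
planet_types = {
    # type: (average number per star, chance of abiogenesis on that type)
    "earth-like":  (0.1, 1e-3),
    "europa-like": (0.5, 1e-5),
    "mars-like":   (0.3, 1e-53),
}

ne_times_fl = sum(n * f for n, f in planet_types.values())
print(ne_times_fl)  # dominated by whichever type has the largest n * f
```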
This result ignores Katja Grace’s result about the SIA doomsday argument (https://meteuphoric.com/2010/03/23/sia-doomsday-the-filter-is-ahead/), which tells us, in short, that if there are two types of universes, one where rare Earth is true, and another where civilizations are common but die off because of a Late Great Filter, we are more likely to be in the second type of universe.
Grace’s argument becomes especially strong if we assume that all the variability comes from purely random variation of some parameters, because then we should expect to find ourselves in the universes where all the parameters are optimised for creating many civilizations of our type.
Thus the Fermi Paradox is far from solved.
We don’t have priors for rare Earth vs Late Great Filter. If we assume they’re equally likely, then updating according to our own existence favors LGF, but that assumption isn’t necessarily valid. For instance, suppose in the future we determined that multi-cellular life could be expected once per every ten thousand galaxies… now rare Earth is looking pretty good.
This is my conclusion as well. The vast majority of civs exist in universes optimized for producing civs (either by chance, or by choice of some power). And since civs either self-destruct or hit the singularity and take over their universe (or future light cone), the vast majority of civs will find themselves existing within the domains of post-singularity civs that (for whatever reason) want to generate lots of other civs.
The number of civs that find themselves on the eve of singularity, alone, in a fairly old universe, in base reality, will be tiny compared to the civs that find themselves apparently in that position.
There are way more planets with life in the Star Trek universe than a universe where only one planet has life. Therefore we live in the Star Trek universe (p < .001).
I don’t follow this at all.
“Imagine we knew God flipped a coin. If it came up heads, He made 10 billion alien civilization. If it came up tails, He made none besides Earth. Using our one parameter Drake Equation, we determine that on average there should be 5 billion alien civilizations. Since we see zero, that’s quite the paradox, isn’t it?
No. In this case the mean is meaningless.”
I don’t have a problem conceptualizing various distributions where the mean is meaningless, BUT what are we thinking is the causal agent for the mean being meaningless in this situation?
Are we thinking that there’s ACTUALLY a God flipping a coin?
What’s the explanation for why we think such a distribution is(/or might be) at play here?
————-
If I’m being really dense and I missed the explanation of that somewhere in the post, I apologize
Best I can tell, the ‘causal agent’ is the combination of low probabilities and uncertainty about them. In any system of low probabilities which are multiplied together, underestimating a fraction (i.e. p actually being 0.19999 when you estimated 0.1 in the toy example) doesn’t do much: E(N) in the Drake equation changes by a factor of 2.
Whereas each time you’re wrong in the other direction (the true p is 0.00001 instead of your estimated 0.1), E(N) in the Drake equation changes by orders of magnitude. Given enough fractions to be multiplied, you’re highly likely to have severely overestimated at least one of them – your point estimate is 0.1, but the true value could be anywhere in [0, 0.2] – and so the resulting distribution is highly skewed.
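A minimal Monte Carlo sketch of that effect, using the toy setup above (each factor has point estimate 0.1 but true value anywhere in [0, 0.2]; the choice of seven factors is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Seven multiplied fractions, each with point estimate 0.1 but true value
# anywhere in [0, 0.2] (the toy ranges from the comment above, nothing more).
true_values = rng.uniform(0.0, 0.2, size=(1_000_000, 7))
product = true_values.prod(axis=1)

print("point estimate 0.1**7:", 0.1**7)
print("mean of the product:  ", product.mean())      # ~ the point estimate
print("median of the product:", np.median(product))  # noticeably lower
print("P(product < 1e-9):    ", (product < 1e-9).mean())
```

The mean of the product matches the naive point estimate, but the distribution is skewed: a sizeable share of draws land orders of magnitude lower, and wider per-parameter ranges (as in the paper) make the skew far more extreme.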
Is that different from the idea that there must be a Great Filter (or maybe even a few Great Filters)?
——
Just because my wheels are spinning, how common do we think meteors like the one that killed off the dinosaurs are?
That appears to be roughly a once-every-230-million-years event, right?
Would we have evolved to become earth’s dominant species w/o that event?
Meteors aren’t the only thing that can cause a mass extinction. For instance, the oxygen event caused a mass extinction, and the Permian extinction was probably caused by a supervolcano.
Still, no extinction seems to have done much to push back the smarter creatures. Not even the smartest branch of dinosaurs (if not the smartest species from that branch) died off with the meteor: corvids and parrots are theropod dinosaurs. Mammals, and amniotes in general, also survived that extinction. This makes meteors/supervolcanoes/mass extinctions a minor filter at best – something that only some civilizations will be blocked by, as opposed to a major filter which would block something like 99.9999% of them.
I’ve heard some argument, though, that the Earth is in the “galactic suburbs”: if it were closer to the center, where most stars happen to be, it might be baked in radiation and have much more frequent extinction events caused by things like gamma-ray bursts. This lowers the number of stars that could host life dramatically (although the number is still very big, especially over the whole universe, and we should still be able to detect dyson spheres in some other galaxies).
Not relevant to discussion, but it’s neat how you and Conrad made the same point at just about the same time. I refreshed to 2 new comments far apart in the thread but with the same point.
Seems we posted within ~30 seconds of each other, definitely impressive!
Still does anyone have a source explaining the whole “Third Generation Star” thing?
Even if younger stars (in the sense of having formed later, not of their current age) are more metal-rich, shouldn’t some rare-but-not-super-rare events have produced comparatively metal-rich stars earlier in the Universe? Also, the sun formed relatively late, but there are still plenty of stars that are even younger (they’ve had less time, but it’s conceivable some of these steps could be done faster and take ~3 billion years instead of ~4.5 billion).
I’m also sure there are plenty of stars with similar metallicity that aren’t way older than the sun, but maybe ~a billion years older. Although if anyone has links kind of showing “The Sun really is the first star of this metallicity” I’d be very interested. In fact I’m so interested I’m going to research this topic quite a bit more and see what comes up.
I was thinking of it as a filter in the opposite direction
maybe the asteroid created conditions giving smaller smarter mammals a chance,
maybe without the asteroid, all of our ancestors would have gotten eaten by stronger faster creatures
Maybe nature doesn’t select for intelligence as much as we like to think
maybe the lack of asteroids is the filter
I’ve come across arguments that our evolution required a fairly specific number of asteroid impacts: too few and evolution doesn’t get shocked out of equilibria often enough to produce highly intelligent species; too many and things get reset too often for highly intelligent species to evolve.
IIRC, the argument also had a bit about Jupiter having played a role in producing our rate of large impacts, by diverting a significant fraction of incoming bodies from the outer solar system.
Yes, it’s different from the great filter. Check this explanation, which is better than mine:
https://slatestarcodex.com/2018/07/03/ssc-journal-club-dissolving-the-fermi-paradox/#comment-645052
It might help to consider the other extreme. If there is a linear uniform distribution of values between one and a godzillion, the expected value is half a godzillion. But the odds of that being the right answer are minute. The odds are only fifty-fifty that the right answer is within a quarter of a godzillion of half a godzillion, and 25% that it’s less than a quarter of a godzillion.
That’s still a lot. But if you agree that it doesn’t make sense for the distribution to be linear uniform, because our best guesses of things are merely orders of magnitude, then consider a log-uniform distribution. If N is the log-10 of a godzillion, then the median is 10**(N/2) and you have a 25% chance that the true answer is under 10**(N/4).
So if your median estimate is a million, there’s still a 25% chance that the true number is under a thousand. If your median estimate is a thousand, there’s a 25% chance that the true value is under 32.
Any argument that starts, “So we expect there to be a million technical civilizations in the galaxy, and surely one of them would expand to the stars,” must deal with the fact you are two coin flips away from the true number being less than a thousand, and three coin flips away from it being less than 32, so the “surely” part starts to look pretty thin — especially when you condition it on the observation that nobody has in fact taken over the galaxy. The Fermi Paradox looks paradoxical only if we treat the expected value as the true value and reason from there.
It would be a lot different if there were some reason to say, “There’s 95% chance that the number of technical civilizations is between 500,000 and 1,500,000.” But on this distribution there isn’t, and we don’t have much reason to posit any other distribution.
(Actually, in the paper they assume a log-uniform distribution for each factor, which leads to a log-normal distribution for the final answer. But I think the intuition is similar. Anders will surely correct me if I’m wrong.)
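If it helps, here’s a quick numerical check of those quantiles; N = 12 is arbitrary, and any “godzillion” behaves the same way:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12                                        # "godzillion" = 10**N, arbitrary
x = 10 ** rng.uniform(0, N, size=1_000_000)   # log-uniform on [1, 10**N]

print("median vs 10**(N/2):          ", np.median(x), 10 ** (N / 2))
print("25th percentile vs 10**(N/4): ", np.quantile(x, 0.25), 10 ** (N / 4))
print("mean (dragged up by the top): ", x.mean())
```

The mean lands far above the median, which is also why a point-estimate Drake calculation reads as much more optimistic than the bulk of the distribution.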
Has anyone else read this paper on the non-gravitational acceleration of the interstellar object Oumuamua? It seems to rule out everything except cometary outgassing, but notes that even the outgassing hypothesis seems to have a lot of issues (in particular, no visible coma, which should be present given the amount of material needed to produce the observed acceleration). I’m also not clear on how outgassing produced an acceleration away from the sun when we know that the object is elongated and tumbling fairly rapidly. I’m probably missing something, but it feels a little bit like the academic-paper version of “we’re not saying it’s aliens, but…” https://drive.google.com/open?id=1TP5qbkQlfE4Yb1X_yIGGsVPHAClCYlt2
Even if Oumuamua isn’t a probe, I find it very curious that Earth hasn’t been visited by a probe in the past. In theory I agree with what Carl Sagan said, that we should look into the possibility of ancient alien visits. However, after researching Ancient Aliens quite a bit, I feel I can conclusively say “there were no aliens that visited Earth.” I also almost feel confident in saying, due to the lack of detection of obscure metals, “there were not even unusual probes in the Permian.”
My take is still “Dimensional rifting or [unknown unpredictable black swan future tech here] is so much better than universal expansion that everyone does it”
I’m not sure the lack of found probes is meaningful. The oldest stretch of the earth’s surface is only 2 million years old. A probe that landed 10 million years ago would have been buried, turned over, crushed, and churned, with anything left of it sitting a mile below the seafloor.
Yea, I’m certain of no aliens in the historical record but not entirely sure of “No Aliens on the Earth’s surface in the Permian”
It sounds somewhat plausible. My main argument against comes from a similar argument about “how do we know humans are the first technological civilization on Earth?” Most arguments in topics like that are that we should be able to detect some incredibly bizarre chemicals or the like if such a civilization had existed. I’d say that this is likely for probes as well, although one probe is of course minuscule compared to a city.
Still we ought to expect something like millions of probes given the age of the earth/universe/etc.
We also don’t see any probes on the surface of the moon, which we’ve imaged pretty well.
Sorry to butt in on your speculation about alien visitors.
Um, what?
I’m pretty sure that the fossil record goes back quite a bit further than 2 million years.
I think you’re conflating the general deposition/weathering effects and subduction. Sure, such a probe might well be buried, but wikipedia says:
Crust_(geology)
Unless what you’re thinking is that the typical ancient alien will police their brass.
Carry on.
I’ve believed this for a long time. Considering every planet we find is some kind of weird methane-covered diamond world or giant blob of tapioca pudding, it’s just seemed likely the Drake equation was way, waaaaaaaay too generous. It seemed possible the conditions necessary for life to occur are so astronomically rare that it may have only happened once in 15 billion years.
On top of that, it also seems highly likely that interstellar space travel is so difficult it will never be worth bothering with, barring the future invention of magic.
Maybe Fermi’s paradox can be resolved using something similar to the anthropic principle: “If we weren’t seemingly alone in the universe, we wouldn’t be asking why we are seemingly alone in the universe.” Suppose that space and time are full of civilizations. 99% of these civilizations would be able to see other civilizations, so they wouldn’t wonder why they’re alone, since they obviously wouldn’t be. But there would be that 1% of civilizations that for some reason can’t see anyone. They would be the only ones wondering why they’re alone.
Assuming technological progress on alien planets parallels that of Earth, you then also need to assume a roughly 200-year window of SETI-relevant radio technology.
The odds of finding anything are low.
Right, but only in the narrowest of astronomical minds are radio and SETI terribly important. The Fermi paradox was originally about starships and colonies, and we have since added Dyson shells to the list of things the universe is conspicuously missing. And stars blinking green.
I’m not seeing what’s so groundbreaking about this: instead of using a set of guesstimated probabilities as in the standard Drake equation, they use a set of guesstimated probability ranges. The problem is still that these probabilities are just guesstimates; it was always possible to guesstimate a sufficiently low probability for any one of them and “resolve the paradox.” In this sense, there was never a paradox, as it requires no leap of faith to accept that they are low enough to be consistent with what we’ve observed in the galaxy, which is that there seems to be nothing. However, you can still argue that this finding is not what should have been logically expected. Suppose that a pre-modern islander was able to see the outline of other islands, but could not visit them or observe them too closely. Should the null hypothesis have been that they were inhabited much like his own, or that his island was special?
Unless I’m completely misunderstanding, this paper appears to simply push the Great Filter into our deep past as opposed to our future. Also, echoing what @Alexander Turok says above, it’s replaced one set of assumptions with another, more pessimistic set.
I had a conversation about this with a friend of mine who is far more math-literate (and familiar with the subject matter) than I, and he mentioned some things which made me question the legitimacy of the conclusions you draw here, Scott. In particular, if the Drake equation they’re using includes great filters, it can hardly be the case that this calculation rules out the need for great filters to explain why we seem to be alone in the galaxy, can it?
I’m not sure I understand your objection. Scott’s post didn’t mention great filters at all, that I can see.
The Great Filter argument has basically always been about considering The Silence and concluding that people who get big numbers out of the Drake Equation must be grossly overestimating some parameter or another, and analyzing which parameter(s) that might be.
If my understanding of lognormals is correct, doesn’t this render a single Great Filter unlikely?
There are more ways for each step to be somewhat unlikely than for a single step to be almost impossible.
I.e. it’s more likely that 4 values that multiply to 10^-12 are all 10^-3, than one is 10^-12 and the rest are 1.
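A rough numerical check of that intuition – the four-step setup and the 1e-12-to-1 ranges are just an illustration, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four steps, each with a log-uniform probability between 1e-12 and 1.
logs = rng.uniform(-12, 0, size=(2_000_000, 4))
total = logs.sum(axis=1)

# Keep only draws where the overall product is roughly 1e-12.
near = logs[np.abs(total + 12) < 0.5]

# Conditioned on that, how often does a single step carry (almost) the whole filter?
print("draws kept:", len(near))
print("P(some step < 1e-10 | product ~ 1e-12):", (near.min(axis=1) < -10).mean())
```

Under that made-up prior, a product around 10^-12 is usually spread across several moderately unlikely steps rather than concentrated in one near-impossible step.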
While that might be true, most people would also agree that we hardly know anything about most parameters in the Drake equation, and that we could easily be alone if the parameters are different. There’s definitely no consensus that the Drake equation suggests a vanishingly small chance of an empty galaxy. Even the Sandberg et al 2018 paper says this:
It should be no surprise that if you choose high numbers for the parameters, you get lots of civilizations, and if you choose low numbers, you get very few. If you have no idea whether the numbers are high or low – and the authors certainly think we have no idea, as they allow the probability of life arising to vary by more than 50 orders of magnitude – you’d have no idea whether we’re alone in the galaxy or not, and consequently assign significant probability to each. This has nothing to do with single values vs. synthetic point estimates. The results that the authors get from their Monte Carlo simulations just express, in fancier words, the basic intuition that ignorance should translate into open minds. A plain English summary of the conclusion would be “we have no idea what the parameters are, so we have no idea whether we’re alone and shouldn’t be talking about Fermi paradoxes.”
Given the limited speed of light, I suppose that communication and coordination across a galaxy are more or less impossible. You can make sure that individual parts of the empire follow the same values and strategy by making a powerful artificial intelligence and sending a copy of it everywhere. Thus the same rules are enforced everywhere.
But if something new happens on one end of the galaxy, the speed of light makes it impossible to communicate a consensus of your civilization. Either the strategy is decided in advance, or different parts of the empire are going to react wildly differently. Imagine how this applies to contact with an alien civilization: you cannot even make a peace treaty with them, because there will always be some parts of your empire that strongly believe the aliens need to be exterminated (unless you have a precommitment that no type of alien ever gets exterminated, which brings another bag of problems).
So the only safe strategy is to eliminate all aliens. Simply because you have no idea what kind of danger they may pose, and you are unable to coordinate the defense.
But then you lose all possible advantages of trade. Well, trading resources is not important: the universe is built of the same materials pretty much everywhere; the aliens cannot give you any minerals you couldn’t just as well mine yourself. The only valuable thing to trade with the aliens is their technology – maybe their alien ways of thinking allowed them to invent something you would never think of. (However, there is also the danger that they would develop a weapon whose nature is incomprehensible to you.)
So the best choice is to eliminate the aliens from their system, and convert the local system to a giant simulator. Then, simulate the same aliens, as if they were alone in the universe. They are harmless to you, and you can observe how they evolve, and steal any technology they develop.
If you seem to be alone in the whole universe, you are probably already in this type of simulation.
Too many comments for me to read with all the other stuff I have going on. Is there a potential conclusion from all this debate, or at least a number of schools of thought?
I think a Monte Carlo probability approach is absolutely appropriate for the Drake equation, but of course it still presents us with the challenge of explaining why our Galaxy has at least one civ, not zero. Given the number of planets we have now observed, this raises the probability of life existing at least somewhere else pretty high.

My view, for what it is worth, is that the answer lies in the multiverse. If we believe that any advanced technological civilisation either colonises the Galaxy or destroys itself (or otherwise decays), then we would expect all observers to find themselves alone in their own galaxy, because the first one will take all the planets that could have developed civilisations. In other words, if you are not first, you don’t exist. It’s like us worrying about the future technological civilisation that could have arisen in Australia, descended from kangaroos – it won’t happen, because we got there first (unless we die off). Likewise, if all habitable planets end up colonised by earth people, there will be no aliens, because the planets they could have developed on were already taken (unless earth people are already dead, which resolves the paradox another way).

Of course there is a probable subset of multiverses with zoos and berserkers, but I would bet they are a small percentage of the whole. Berserkers are unnecessary if you have already colonised the Galaxy before anyone else evolved – they are only needed in the highly improbable case of multiple civilisations developing simultaneously. Zoos just strike me as silly – all that real estate being wasted for billions of years on the off chance that life may evolve intelligence?
Mmm, no, not really. That we exist means that absolutely none of the terms in the Drake Equation can plausibly have a value of literally zero. If one term, or a combination, is so small that the odds of our Galaxy having a civilization are less than one in 250 billion, it means this Galaxy was lucky, but it doesn’t mean we were. If the odds of any Galaxy having one are less than one in 5*10^20 (stars times galaxies), then it means we were lucky, but there’s no particular reason that calls for explanation: if we weren’t lucky, nobody would be there to notice.
Yes, but zero is one thing and one is another. The chance of exactly one seems much lower than the chance of zero. It either means the Drake equation is more positive than that, or we are really very lucky. Let’s say you are God and can survey all the galaxies (or at least a representative sample), and the Drake equation works like the authors suggest. Then you will see approximately 30% of galaxies with no civs, but the majority with very, very many. Very few galaxies will have just one; it’s hard to imagine a probability function that delivers lots of ones. So either we are very unique somehow, or we have something wrong and the paradox is back.
A galaxy with zero civilizations has no one to debate the Drake equation.
That’s not what the paper implies. The Monte Carlo simulation deals with what the parameters of the Drake equation are. There is (say) a 30% chance that the product is less than one (usually much less), in which case most galaxies will be empty, and the few non-empty ones will usually have one civilization. There is a 70% chance (not taking into account that our galaxy seems to have only one civ) that the product is more than one (usually much more), in which case most galaxies will contain lots of civs (unless the first one takes over).
It’s a bit complicated because it’s about probabilities of probabilities – not only is there uncertainty about whether certain steps happen on a given planet, but there is also uncertainty about the probability of those steps happening.
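A toy version of that two-level uncertainty; the range for the per-star probability is completely made up, and this is meant to show the shape of the argument, not the paper’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

stars_per_galaxy = 2.5e11   # rough Milky Way figure used earlier in the thread

# Outer uncertainty: we don't know the per-star chance of a civilization,
# so draw it log-uniformly over a huge (invented) range of exponents.
p_civ = 10 ** rng.uniform(-25, -5, size=100_000)
expected_civs = p_civ * stars_per_galaxy

print("P(expected civs per galaxy < 1):  ", (expected_civs < 1).mean())
print("P(expected civs per galaxy > 100):", (expected_civs > 100).mean())
```

Most parameter draws give either an essentially empty galaxy or a crowded one; within the nearly-empty draws, the occasional inhabited galaxy will typically hold just one civilization, as described above.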
First, although I’m not an astrophysicist (I wanted to be as an undergraduate, but graduate school and my postdoc took me in a different direction), in my experience people who talk about things like the Fermi paradox too much are kind of on the fringe of anything that most researchers consider a worthwhile use of time. And I think, in a less formal way, this *is* essentially what everyone thinks.
Second of all, I’ve always felt that something kind of like this is the resolution to the St. Petersburg paradox. The expectation value of your return is infinite, but that’s not necessarily the correct metric to use when you make a judgement on how much to pay to enter the game. And I don’t think it’s necessary to resort to utility functions, either. For example, the expectation value of the number of coin flips is 2, and the associated return is only $4 – which is probably roughly the mental math that someone would do when deciding how much to pay to enter.
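For what it’s worth, a quick simulation of the game (standard doubling-pot rules) makes the same point:

```python
import numpy as np

rng = np.random.default_rng(0)

# St. Petersburg game: flip until the first tails; the payout is 2**(number of flips).
flips = rng.geometric(0.5, size=1_000_000)   # geometric, support 1, 2, 3, ...
payout = 2.0 ** flips

print("mean number of flips:", flips.mean())   # ~2, as above
print("mean payout:  ", payout.mean())         # keeps growing with sample size
print("median payout:", np.median(payout))     # stays small
```

The sample mean keeps creeping upward as you add draws (it has no finite limit), while the median payout stays at a few dollars, which matches the mental math above.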
To me the whole point of the Fermi Paradox/looking for aliens is how much time it would save us on research. Just copy what’s already known in a few years instead of taking thousands of years to invent it yourself.
Like if you’re a Hawaiian Islander in 1700 are you better off inventing iron yourself or trying to figure out if a place like London exists and heading over there to just learn from the best blacksmiths on the planet?
Although if the year was 3000 BC it’d be a less strong (but still strong) argument. If it was 2010 and you’re an Andaman Islander (uncontacted tribe near India with the tech of ~40,000 BC) then it makes even more sense. We don’t even know what year we’re in (in this analogy).
We can probably even learn some things from what we know already about the Fermi Paradox. Like, it seems Dyson Spheres aren’t a thing, that has all sorts of implications for the future of the energy industry.
An aside, but probably of interest: for me the Fermi paradox was mostly dissolved a while ago when I learned/realized that we’re basically blind when it comes to detecting intelligent life.
See: Detectability of interstellar messages – Physics Stackexchange
and: If there were intelligent life in another galaxy, would we notice? – Physics Stackexchange
Key points: if another civilization tried to send a signal as powerful as the most powerful signals that we can send into space, directly at us, and we were looking in exactly the right direction, at the right frequency, we would only be able to detect these signals out to about 1,000 light years, even with the next generation of telescopes that we’re building. One such project aims to survey 1 million stars over 10 years.
Some ballpark numbers to put the 1,000 light years and 1 million stars in perspective:
Alpha Centauri: > 4 light years (ly)
Milky Way: 100,000 ly diameter, 2000 ly thick
Milky Way: 250 billion +- 150 billion stars
In other words, we will only search 0.0004% of the stars in the galaxy.
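The arithmetic behind that percentage, using the rough star count above:

```python
stars_surveyed = 1_000_000    # the 10-year survey mentioned above
stars_in_galaxy = 250e9       # rough central estimate, +- 150 billion

print(f"{stars_surveyed / stars_in_galaxy:.4%} of the galaxy's stars")
```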
So our current/next generation of telescopes is starting to be able to seriously detect these signals (again, if they’re aimed at us, at exactly the right time), but we still will only be able to see a small fraction of the galaxy properly. Nonetheless, it’s an extremely exciting time to be alive.
The more general radio noise of our civilization wouldn’t be detectable by us from even 1 light year away, i.e. we wouldn’t see ourselves from the nearest star.
I know a lot of the Fermi paradox is also based on not just detecting signals, but asking why aliens/probes haven’t reached us and also assuming alien civilizations can send signals that are several orders of magnitude stronger than ours and this gets more into the (still interesting, but deviating from what I wanted to share) speculative realm. And I also know that this isn’t the only way one might infer the existence of other intelligent civilizations. But I ignore these avenues in the interest of sharing what we can say right now about this one aspect of the Fermi paradox, with hard numbers.
tl;dr Our current level of technology can only detect relatively nearby direct signals from other civilizations in our galaxy. It’s just a very difficult task. Our current and next generation of telescopes will be significant improvements in this search, but still will only be able to detect signals from a fraction of the galaxy.
Now I have to revise my estimate of my own intelligence downward for not having thought of this myself.