I.
The year 1969 comes up to you and asks what sort of marvels you’ve got all the way in 2014.
You explain that cameras, which 1969 knows as bulky boxes full of film that takes several days to develop in a darkroom, are now instant affairs of point-click-send-to-friend that are also much higher quality. Also they can take video.
Music used to be big expensive records, and now you can fit 3,000 songs on an iPod and get them all for free if you know how to pirate or scrape the audio off of YouTube.
Television not only has gone HDTV and plasma-screen, but your choices have gone from “whatever’s on now” and “whatever is in theaters” all the way to “nearly every show or movie that has ever been filmed, whenever you want it”.
Computers have gone from structures filling entire rooms with a few KB of memory and a punchcard-based interface, to small enough to carry in one hand with a few TB of memory and a touchscreen-based interface. And they now have peripherals like printers, mice, scanners, and flash drives.
Lasers have gone from only working in special cryogenic chambers to working at room temperature to fitting in your pocket to being ubiquitous in things as basic as supermarket checkout counters.
Telephones have gone from rotary-dial wire-connected phones that still sometimes connected to switchboards, to cell phones that fit in a pocket. But even better is bypassing them entirely and making video calls with anyone anywhere in the world for free.
Robots now vacuum houses, mow lawns, clean office buildings, perform surgery, participate in disaster relief efforts, and drive cars better than humans. Occasionally if you are a bad person a robot will swoop down out of the sky and kill you.
For better or worse, video games now exist.
Medicine has gained CAT scans, PET scans, MRIs, lithotripsy, liposuction, laser surgery, robot surgery, and telesurgery. Vaccines for pneumonia, meningitis, hepatitis, HPV, and chickenpox. Ceftriaxone, furosemide, clozapine, risperidone, fluoxetine, ondansetron, omeprazole, naloxone, Suboxone, mefloquine – and for that matter Viagra. Artificial hearts, artificial livers, artificial cochleae, and artificial legs so good that their users can compete in the Olympics. People with artificial eyes can only identify vague shapes at best, but they’re getting better every year.
World population has tripled, in large part due to new agricultural advantages. Catastrophic disasters have become much rarer, in large part due to architectural advances and satellites that can watch the weather from space.
We have a box which you can type something into and it will tell you everything anyone has ever written relevant to your query.
We have a place where you can log into from anywhere in the world and get access to approximately all human knowledge, from the scores of every game in the 1956 Roller Hockey World Cup to 85 different side effects of an obsolete antipsychotic medication. It is all searchable instantaneously. Its main problem is that people try to add so much information to it that its (volunteer) staff are constantly busy deleting information that might be extraneous.
We have the ability to translate nearly any human language to any other human language instantaneously at no cost with relatively high accuracy.
We have navigation technology that over fifty years has gone from “map and compass” to “you can say the name of your destination and a small box will tell you step by step which way you should be going”.
We have the aforementioned camera, TV, music, videophone, video games, search engine, encyclopedia, universal translator, and navigation system all bundled together into a small black rectangle that fits in your pocket, responds to your spoken natural-language commands, and costs so little that Ethiopian subsistence farmers routinely use them to sell their cows.
But, you tell 1969, we have something more astonishing still. Something even more unimaginable.
“We have,” you say, “people who believe technology has stalled over the past forty-five years.”
1969’s head explodes.
II.
It’s the anniversary of the moon landing, which means I have to deal with people passing around memes like this:
But I probably can’t blame the date for the recent discussion here of whether technological progress halted in 1972.
So I would like to take a moment to critique a certain strain of futurology.
There seems to be this thing where people imagine something that would look really cool, and predict that if we work hard on it for fifty years, we’ll be able to pull it off. And then fifty years later, when barely any work has been done on it at all, they start looking for someone to blame.
Missions to Mars. Lunar colonies. Giant floating solar power satellites. Undersea domes. Ten mile high arcologies. Humanoid robots.
Whereas real technology doesn’t advance by heading in the direction of something that looks cool, unless some government or tycoon is throwing lots of money in the direction of coolness. Real technology hill-climbs towards things that are useful and profitable.
Why haven’t we colonized space yet? For the same reason we haven’t colonized Antarctica. It’s very cold and not a lot of fun and if you go outside you die.
In fact, Antarctica is preferable to space in pretty much every way. There is no reason to colonize space before you have finished colonizing Antarctica. And there is no reason to colonize Antarctica until you have finished colonizing Nebraska (population density: 9 people per square km).
I will maintain that even if we had enough space flight technology that elementary school classes routinely took field trips to Mars, Mars would end up with two or three scientific bases, a resort where tourists could take their pictures on Olympus Mons, a compound of very dedicated libertarians, and nothing else. No domed cities. No colonies fighting for independence. Think that’s implausible? School children take field trips to the Mojave Desert all the time, and it pretty much looks like that. Why should Mars prosper more than a much more habitable comparison area?
Likewise, the reason we don’t build undersea domes isn’t because we’re not good enough. It’s because humans breathe better on land, and there’s still a lot of land left to live in. On the rare occasion we want a resource located underwater, we build an oil rig on top of it and pump it from the surface, i.e. the part of the ocean where you don’t get insta-crushed by ten atmospheres of pressure if something goes wrong.
And the reason there are no ten-mile-high arcologies is that we haven’t already tiled all the desirable real estate with 9.9-mile-high arcologies and decided we still need more space.
Science fiction authors and would-be prophets stubbornly refuse to admit “would anybody reasonably pay money for this?” into their calculations. And so every ten years they end up predicting the “smart house”. Where from your phone, you can control the lights in any room of the house! I imagine futurologists sitting in their kitchens, thinking “Oh no! I wish the lights were on in my bedroom, but all I have is my phone!” Maybe one day we will have houses that contain teleporters that can bring you to any other building in the world without stepping outside. But if you’re in the kitchen and you want the light on in the teleporter room, you’ll still just walk to the teleporter room and flip the @#$%ing switch.
I am not defending this as a normative view of how progress should work. There is a lot to be said for colonizing Mars as a survival strategy in case something unexpected happens to Earth. And there’s also a lot to be said for Manhattan Project style efforts to discover a technology in a non-hill-climbing way, something where there’s not a profitable transitional form at each step between where we are and what we want. But I would suggest we stick to those criticisms, and not to a criticism of advance per se.
(actually, we’re not even all that bad at getting past the hill-climbing thing; government subsidies to solar seem to have been a very successful attempt to push solar out of an area where it wasn’t profitable to improve into an area where it is)
But it’s going to take some pretty creative accounting to make moon shots profitable. The main reason people funded the moon landing in 1969 (as opposed to the reason that people not involved in funding felt good about it) was to beat Russia and then get to rub it in their face forever. Nowadays that’s no longer so fun (although rapidly becoming funner!) Therefore, we get the expected outcome of fewer moon shots until someone else thinks of a compelling incentive to go to the moon. So far there isn’t one. There’s no need to bring technological stagnation into the picture.
On space, specifically, I’m pretty unconvinced. I think that a lot of the issue is that nuclear power suddenly disappeared as a viable option for space lift. Project Orion was originally intended to launch from the earth’s surface under nuclear power. Nuclear thermal rockets also are potentially practical there.
I’d say a good part of the problem is actually capitalism. A Mars colony (*not* the moon: your scenario applies fairly well to the moon, since self-sufficiency there is impractical, but not to Mars) would have strong benefits as a place to expand, a place to manufacture space equipment (Mars is earth-like, but has less gravity so space lift is vastly easier), and potentially a place that can be mercilessly strip-mined because there isn’t a pre-existing biosphere. Under capitalism, there’s not much of a way for those who put forward the money to collect.
Another part of the problem seems to be a sort of risk-adverseness.
High launch costs, while indicative of general failure in US government innovation, don’t answer Scott’s question of what economic motive anyone would have to build Mars colonies. A “place to expand” begs the question, precision manufacturing is crazy difficult in a windy dust-filled near vacuum (try doing it at Burning Man, a mere two hours away from civilization), and mining is vastly easier on asteroids with fully automated equipment.
True self-sufficiency is extraordinarily difficult because supply chains run thousands of layers deep, and no one even knows where the bottom is because no one can spare the effort to dig that far (http://en.wikisource.org/wiki/I,_Pencil). North Korea has an absolute dictatorship and a state ideology of self-sufficiency (and it isn’t sitting in a near-vacuum, as a Mars colony would be!), and even they’ve found it impossible.
>High launch costs, while indicative of general failure in US government innovation, don’t answer Scott’s question of what economic motive anyone would have to build Mars colonies.
The obvious economic motive is freedom. Restrictions on nuclear technologies are onerous enough, and nuclear technologies are important enough, that it could literally be worth it to go to another planet just so you don’t have to deal with regulations on nuclear technologies.
But of course there is far more than that. You can crash metal rich asteroids into Mars and use them, without having to deal with whatever you’d have to deal with to do that on Earth. That’s a huge game changer – right now essentially all the good shit we have reachable on Earth is those asteroids that happened to crash into it at the right time. Everything else is stuck, having sunk to the core while the Earth was molten.
Off-topic reply, just read that link.
That logic can be extended to almost any object!!! But seriously, we really do know close to diddly squat about what’s happening around us.
The word is “risk-aversion”.
Ten-thousand monopoly bucks to this person!
The problem isn’t really the lack of potential reward. The thing about Mars is that it doesn’t have NIMBYs, even of the “on the next continent” kind. So the potential rewards are massive.
But the risks are also ridiculous, just insane in fact. And even though the potential-risk/potential-reward calculation might wobble out in favour of doing it, risk bias wins.
This post isn’t convincing regarding the general “technological innovation hasn’t stopped” thesis. The points in section 2, while entirely correct and not pointed out nearly often enough, are very weak examples of “we should have technology X and we don’t”. Did anyone expect ten-mile arcologies by 2014? http://en.wikipedia.org/wiki/Arcology seems to be mostly sci-fi references, not serious plans which were then canceled, like nuclear power for everybody or BART to Palo Alto.
If you look at, eg., the technologies you yourself cite as products of government innovation:
“Advances invented either solely or partly by government institutions include, as mentioned before, the computer, mouse, Internet, digital camera, and email. Not to mention radar, the jet engine, satellites, fiber optics, artificial limbs, and nuclear energy. (…) Even those inventions that come from corporations often come not from startups exposed to the free market, but from de facto state-owned monopolies. For example, during its fifty years as a state-sanctioned monopoly, the infamous Ma Bell invented (via its Bell Labs division) transistors, modern cryptography, solar cells, the laser, the C programming language, and mobile phones…” – “Competence of Government”
every one was invented between roughly 1930 and 1980. And of the new technologies cited in this post, almost all of them (minus some medical research and a few other exceptions) were the results of private innovation. Technological progress overall is hard to measure, but it seems extremely difficult to argue that there hasn’t been a big dropoff in US government innovation since the time of the moon landings. Especially adjusted for number of dollars spent. And since many techs (eg. nuclear reactors, transportation, civil engineering) inherently require lots of government involvement, this had obvious negative consequences for those sectors.
Of course government innovation has dropped off, because large numbers of voters and politicians – under the influence of false economic theories – have demanded “small government” and have systematically demanded the reduction of all non-defense spending.
Sure they have, and what has been the actual trend of non-defense spending since 1980?
Notice I did not say that the demands were successful. But the kind of spending that can most easily be justified in an environment hostile to government spending in general is not likely to be innovative.
For example, it’s probably easier to sell increased NIH funding on the grounds of medical research being an obviously useful thing, whereas non-NIH research funding appears to have remained flat at best. (And at something under 80 dollars per capita, it would be barely visible on your graph.)
I’d even argue that NASA deliberately ruined all its useful projects to save space exploration politically.
@Kalifornen: I am curious now, what other projects do you have in mind when you say that?
Well, so you know about the nuclear third stage for the Saturn V?
Outside of the most popularized projects, I know very little, even less so of technical details and scrapped projects and alternatives.
Basically, there was to be a nuclear-rocket third stage for the Saturn V. The technology was very close to flight hardware, with plenty of test-stand time, but it never flew, due to the simultaneous fear of nuclear power and collapse of space ambition.
This of course was a step down from Project Orion, which at its greatest extent was to be an immense warship with nuclear missiles, guns, and two dropships.
The public does not know how much NASA costs (and commonly thinks it takes over ten percent of the US budget, while it is actually less than half a percent).
As a result, it is beneficial to NASA to make all of their projects look minor, unimpressive, and short-term practically useful.
You can’t just blindly cite non-defense spending. You need to actually talk about R&D spending specifically.
Personally, I see a lot of technological progress that has been made, but also a lot of ways in which:
A) The “intellectual property” system has kept a lot of inventions under corporate lock-and-key, and
B) The State hasn’t done enough to speed the path from the first peer-reviewed research publication about X to an implementation of X usable for engineering and experimentation to an eventual marketplace for X-based products. The State needs to act here, because that path is only going to get longer as science picks the low-hanging research fruit.
I will yell to my friend to flip the light switch if they’re closer. So I think if we ever get working voice recognition I’d pay for the smart house.
“Smart house” ideas make more sense the more functionality you can add. If all you can do is turn off the lights with your phone, it’s just a more complicated alternative to flipping a switch. But if you can open/close windows, adjust HVAC, dim/brighten the lights, access your home security system, etc., then you’ll use it enough that it becomes second nature.
All of these features are computer-related so they should get cheaper over time. The problem would be coordinating all of the features with one control system (see the sketch below).
As I understand it, this is what Google is trying to do with Nest.
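To picture the coordination problem, here is a minimal sketch in Python of the “one control system” idea. Every name here (Device, Light, HomeController, the set_state call) is hypothetical, invented for illustration; it is not Nest’s or any real product’s API.

```python
# A hypothetical unified controller: every device type implements the same
# tiny interface, so one system can coordinate lights, HVAC, windows, etc.
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def set_state(self, **state):
        """Apply a device-specific state change."""

class Light(Device):
    def set_state(self, on=False, brightness=100):
        print(f"light: on={on}, brightness={brightness}%")

class Thermostat(Device):
    def set_state(self, target_c=21.0):
        print(f"thermostat: target={target_c}C")

class HomeController:
    """Single point of coordination for heterogeneous devices."""
    def __init__(self):
        self.devices = {}

    def register(self, name, device):
        self.devices[name] = device

    def command(self, name, **state):
        self.devices[name].set_state(**state)

home = HomeController()
home.register("bedroom_light", Light())
home.register("hall_thermostat", Thermostat())
home.command("bedroom_light", on=True, brightness=40)
home.command("hall_thermostat", target_c=19.5)
```

The abstraction itself is the easy part; the coordination problem the comment above describes is getting every manufacturer’s hardware to speak some interface like it.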
Would you like a beta reader?
« We have the ability to translate nearly any human language to any other human language instantaneously at no cost with relatively high accuracy. »
No, we don’t.
What we do have is the ability to translate nearly any word from a select choice of common languages, and statistical methods (which are indeed getting better) that attempt to efficiently translate sentences made up of those words, with varying levels of accuracy.
It is a step up from the paper and dictionary method, but we’re nowhere near “universal translator” yet.
Typhon
I think you’re working with different definitions of “accuracy”. We’re nowhere near the ideal of “write what I would have written if I knew [language X] and had decided to work in that language”, but you can usually understand what was meant. This might not work with uncommon languages, but it is usually good enough with common ones.
(To test this, I went to the Wikipedia homepage (the “choose a language” one, not the English one), chose the first link I saw that used a non-Latin alphabet, chose a random link there, and ran it through Google Translate. The end result is certainly not easy to read, but I can get almost 100% of the information content if I work at it.)
I tried reading your link. I’m convinced the only reason I understand what it’s saying is that I roughly already know what it should say, i.e. I know how copyright works and what Wikipedia’s license and attitudes are. I wouldn’t be able to learn anything useful from it if, like intended readers, I had misconceptions about the concept of copyright and no idea about Wikipedia’s license.
As I said, the levels of accuracy vary, but even between English and French, closely related languages that have been prominent choices in research about machine translation, Google Translate still makes a lot of mistakes, and is bizarrely inconsistent (one of its biggest weaknesses is polysemy).
Typhon
They’re not excellent, but they’re way way better than I expected them to ever be if you had asked me in 2000, and they’re better than a non-negligible fraction of translations written by humans.
(English and French are actually pretty distantly related.)
On the other hand, 900 years of influence. #normanyoke So it might be easier than English–Dutch. (Or not. I don’t know.)
Given that humans are almost always able to work out the appropriate meaning of a polysemous word from context (if they can’t, then the sentence is just confusing, not poorly translated), I wonder if it might improve readability to abandon the pretense of a one-to-one translation and go ahead and present multiple possible guesses for polysemy and similar issues. Note that if you use translate.google.com for text, you can do this just by clicking on a word, but this doesn’t seem to work when translating a whole webpage.
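As a toy sketch of that “present multiple guesses” idea in Python: the candidate words and confidence scores below are invented, and no real translation API is involved; a real system would take them from the model’s probability distribution.

```python
# Instead of committing to one translation of a polysemous word, surface
# the top few candidates inline and let the reader disambiguate from context.

def render_with_alternatives(tokens, candidates, confidence_cutoff=0.75, top_k=3):
    """Show alternatives for any token whose best candidate scores below
    the cutoff, i.e. wherever the model isn't confident."""
    out = []
    for tok in tokens:
        options = sorted(candidates[tok], key=lambda pair: -pair[1])
        best_word, best_score = options[0]
        if best_score >= confidence_cutoff or len(options) == 1:
            out.append(best_word)
        else:
            out.append("[" + "/".join(w for w, _ in options[:top_k]) + "]")
    return " ".join(out)

# French "avocat" famously means both "lawyer" and "avocado".
candidates = {
    "l'":     [("the", 0.99)],
    "avocat": [("lawyer", 0.55), ("avocado", 0.45)],
    "mange":  [("eats", 0.97)],
}
print(render_with_alternatives(["l'", "avocat", "mange"], candidates))
# -> the [lawyer/avocado] eats
```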
A page where people are less likely to already know what it says.
I won’t say that I got 100% of the information, but I got quite a lot (at least until halfway through, when Google apparently decides to stop translating for some reason).
There are things English-speakers were not meant to know.
OK. I could write that article for en.wikipedia if it wasn’t already there, but it would be enough of a pain that I’d want to be paid something for the effort.
Seems to be having trouble with particles, but the rest of it is pretty comprehensible. I’d guess that this is because Japanese grammar is less contextual than English grammar and the translator isn’t bright enough to rederive the right linking words.
What does this mean?
I can break Google Translate with El Pais articles. That’s not high accuracy, that’s “can often but not always get the gist, or at least the topic, though maybe not reliably whether it’s positive or negative.”
I was going to make a similar comment; but emending “nearly any human language” to “nearly any human language spoken by more than N people”, for some value of N I’m not going to bother to figure out, is probably closer to true.
No; machine translations between Chinese and everything else are still shit.
While I agree with you that the stagnationist hypothesis looks doubtful, this would have been more convincing if you had compared progress over the past 45 years to progress over the previous 45. “There has been progress” does not refute “progress has slowed.”
I read a lot of that as “See, I can pick cherries too!” game-playing. Fun, but not really the core of the argument.
Eh, William Gibson has a major point: there was technological progress, but it hasn’t been evenly distributed.
Take the Internet. It’s the so-called big technological triumph of the past 45 years. We’re using it right now. Now let’s ask: how many people anywhere actually have broadband internet useful for watching Netflix, as a fraction of those living in sufficiently population-dense environments that we all know damn well it could be built if there was corporate and political will to build it? I can still remember living in a well-off suburb in Massachusetts in 2002 where there was no broadband internet whatsoever in 75% of the town’s land area, including several whole well-off burbclaves, because AT&T just wouldn’t get off their arses and build the necessary infrastructure. We had cable TV and Gamecubes, yeah, but I needed to book time on a family dial-up machine to look up game-programming tutorials.
Then when we moved again, by 2003, household wifi became reasonably common.
Or for a further example, just ask anyone trying to get decent price and selection on broadband internet in Boston or Seattle (or here in Israel, for that matter).
Contrast this with indoor plumbing or electrification, where massive state-forced projects were undertaken to ensure that everyone living in an even somewhat dense town or city damn well got access to the Big Technological Thing as civil infrastructure. What portion of modern technology’s potential is underexploited because we treat it as a luxury good instead of infrastructure?
Electrification spread rather like broadband service in its early years: the cities were quickly electrified (some before the AC/DC standards war had finished) and rural communities were only slowly added as it became cost-effective to add them. Electricity networks had the advantage that they could be created anywhere there was a power source without the need for connection to the whole network, so it was easier to electrify remote areas, but nonetheless, large parts of the country remained without electrical service until the Rural Electrification Act in 1935. Market entities will expand any service network to a point, but 100% penetration (and the network effects that it generates) requires impetus from outside.
>”Computers have gone from structures filling entire rooms with a few KB of memory and a punchcard-based interface, to small enough to carry in one hand with a few TB of memory and a touchscreen-based interface.”
1969 wouldn’t have been so surprised at the touchscreen-based interfaces… 😉
See Ivan Sutherland’s “Sketchpad” at MIT, 1963: youtube.com/watch?v=USyoT_Ha_bA
Edit: Sorry, wasn’t expecting the video to embed in the post. Let me know if that’s considered bad form… However it’s a classic and recommended watching for everyone who hasn’t seen it!
In other news, houses in Tokyo do indeed have remote-controlled lights for every room as a standard feature, based on my personal experience here. Not that that has any impact on Scott’s thesis.
But it does appear, for example, that every engineering plastic in current use was invented before 1960, although they’ve expanded in their use since then. I’ve heard it said that, basically, in 1930 all of the brilliant silicon-valley types saw the future in plastics and went on to develop the various polymers we use now, like Teflon and such. These days, young graduates are primarily excited about information technology, which is why automatic translation, storage, and big-data-mining are taking off. But technological development in many other areas has slowed a lot.
Is that even true, though? There’s some really exciting materials stuff on the horizon, to take your example of plastics–just off the top of my head, carbon nanotubes, nanomaterials and metamaterials. Once those things take off, you can bet that industry will invest heavily in creating products out of them. And that’s not counting the everyday stuff, like that goop that I just found out has the delightful name of fugitive glue.
Kevlar is a trivial but not substantive refutation (invented in 1965). And there are some other very high performance polymers that are more recent than that (e.g. polybenzoxazole). You don’t hear much about them because they’re not in wide use; as you note, usage expands over time.
The more subtle refutation is that synthesizing some sort of polymer chain with a given backbone is only one small part of applied polymer research (which is part of why polymers take time to come into wide use). Even in terms of pure chemistry, there’s a great deal of difference between “yep, that’s a long chain” and actually controlling the molecular structure. And there’s the processing technology. And the additives. And of course there’s the economics of all that.
Reading this reminds me of Eliezer Yudkowsky complaining that houses in America don’t have roller shutters on their windows; they are ubiquitous here in Italy.
(P.S.: BTW, why the hell does Wikipedia spend all those words on how roller shutters protect from burglary and hail? The real reason they’re so awesome is they protect from light, so that the sun doesn’t wake you up every morning in June at freaking 5 a.m.)
I like this case, but one thing stands out as particularly anachronistic to me:
Engelbart built a wheel mouse in 1965, and Telefunken independently invented (AFAIK) the Rollkugel, a ball mouse, in 1968.
I’m fairly sure some form of printer also existed. The point is that practically nobody used mice in 1969 (even among those who did use computers), and now very few people get through a whole day of computing without touching the mouse.
Peripheral printers didn’t seem to have been developed yet, according to my cursory research. It was either integrated printers on dedicated systems, or peripheral plotters, which were something like remote-controlled mechanical pens, the sort of thing you’d use for graphs and other linework rather than images.
Relative rarity of use feels like a tangent: If Scott says we have artificial eyes today, I’m going to say we had mice in 1969.
The issue isn’t whether we had mice, it’s whether we had a billion mice. As far as I can tell, at least in Europe and if you count the ones in landfill, computer mice outnumber the kind that have tails and squeak.
That something existed in three research laboratories in Boston in 1969 doesn’t make it unexciting that it exists in every third household in Bolivia in 2014.
It would be like complaining, when the space elevator opens in 2054, that the fullerene cables aren’t a novel technology because graphene was available in square-millimetre quantities by a protocol involving sticky tape in 2004.
We don’t have a billion artificial eyes, either. The OP appears to be inclusive of recent developments and rare prototypes, and consistency implies that the mouse should therefore be dropped from post-1969 inventions.
Well, the first artificial eye experiment actually also dates to 1968 (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1351724/), so maybe that’s not the best comparison. 🙂
Printers are older than computers. As far as I can tell, printers that can print pictures (as opposed to text or graphs) were invented in the 1960s.
What about the fax machine? Doesn’t that count as a printer? As we all know, it was widely deployed in the 19th century.
If Douglas Engelbart’s amazingly forward-thinking demo in 1968 means that there’s been no substantive advance in HCI since then, then De rerum natura means that there’s been no substantive advances in physics since the days of Epicurus or at least Lucretius.
Well, to be fair, the man did describe Brownian motion in the first century BC. You’re obviously right to say that chemistry/physics have advanced beyond that now, but it did take somewhere in the neighborhood of eighteen centuries to catch up.
The fact that we’ve only recently started to show up the Romans (and are still lagging in some areas, e.g. cement) should really tell us something about how scientific and technological knowledge expands and contracts. It’s a bit cliché, but the comparison to the late days of Rome might very well be an instructive one here; innovation as such didn’t really stop, the Byzantines were still doing impressive and novel things well into the Crusades, but what technological skill there was retreated to fortified enclaves as most of the empire was tearing up their roads and monuments to build castle walls.
> Science fiction authors and would-be prophets stubbornly refuse to admit “would anybody reasonably pay money for this?” into their calculations. And so every ten years they end up predicting the “smart house”. Where from your phone, you can control the lights in any room of the house! I imagine futurologists sitting in their kitchens, thinking “Oh no! I wish the lights were on in my bedroom, but all I have is my phone!” Maybe one day we will have houses that contain teleporters that can bring you to any other building in the world without stepping outside. But if you’re in the kitchen and you want the light on in the teleporter room, you’ll still just walk to the teleporter room and flip the @#$%ing switch.
It sounds like you haven’t heard of Ubicomp, which is totally real and not the name of an AI villain from Dilbert. Ubicomp is a very active research program in HCI (people develop ubicomp technologies and publish lots of ubicomp papers and have ubicomp-related academic conferences and so forth). That whole “smart house” thing? That’s a big area of interest in ubicomp. Of course, the idea isn’t so much that you control the lights from your phone (that would be silly), but that the lights in your bedroom turn on automatically when you decide to walk to the bedroom, via various sensing technology + algorithms that do inference from that sensor data.
If there is stagnation, it’s in energy production. People imagined that nuclear power was going to lead to even more powerful technologies. The phrase you sometimes hear thrown around is that electricity would become “too cheap to meter”. People imagined this would happen similarly to the Tennessee Valley Authority, but with even cheaper and more abundant energy, and for the entire country/world.
That is very true – with one GIANT exception. Lithium-ion batteries are amazing, and have tossed micro battery-powered aircraft into the realm of practicality. There’s a lot of improvements in efficiency, and cheap solar panels are NICE especially if you want to be self-sufficient or live in a remote location, but there’s no high-end improvement.
And yet they can’t even power my smartphone for 24 hours of moderate usage!
Aren’t photovoltaic cells far, far cheaper than they were in the 1970s? That should count for something. (Learning curves on wind energy are also hardly stagnant, but not nearly as dramatic as those for solar.)
Also, here’s a really cool report on the possibility of making geothermal plants nearly anywhere, which would be pretty amazing; there are a variety of pilot projects going on now.
There’s a lot of “smart house” stuff available; it’s just not mainstream enough to come from IKEA yet so you have to think hard about it, and maybe do some wiring and stuff. There are problems like “software development cycles are weeks long but house-improvement project cycles are decades long” (so if I install a thing to control my lights from my phone today the software might be obsolete by next year, but do I *really* want to rewire my house next year to be compatible with the new software?). Also there are problems like “anything on a network can be hacked”…
And it costs a lot of money, and right now not enough people are willing to pay the “lot of money”. I guess if enough people paid then it would become cheaper and eventually everyone would have it. But the advantages of turning on your lights from work are probably outweighed by the disadvantages of having your annoying neighbour skriptkiddie make your lights flicker all night…
The development cycle issue is mostly a problem with the way we build houses, though; when all your wires are bundled up together and secured behind 2x4s and plasterboard (if you have a house designed to blow away in the wind), or actual brick and plaster (if someone intended the building to last) working on your communication and power systems is a pain. Installing a new power socket requires a masonry chisel and re-plastering skills, not to mention replacing the wallpaper, so nobody wants to adjust the system more often than absolutely necessary.
Compare that to the much more frequently updated office-type situations where everything is behind either trunking or very light false ceilings, and the upgrade process is effectively tool free aside from possibly a ladder. I anticipate ‘smart home’ technologies becoming much more common as new houses are built, but with a fairly low incidence of retrofitting, much as people who build their own homes now are not infrequently including Cat6 cabling.
As to the ‘things on networks can be hacked’ – go back to wires for anything that doesn’t have to move. Wifi-enabled light bulbs are a security risk, whereas ethernet-enabled light sockets might actually make sense. I can certainly see a use case for turning the lights on when someone enters the room if the ambient light is below a certain level, and off if the room is unoccupied for a certain period (a minimal sketch of that rule follows below).
(For ‘ethernet’ read ‘whatever fast networking cable is most common at the time’ – I’m not betting against this being optical fibre in the next decade)
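That lights-on/lights-off rule is simple enough to state as code. A minimal sketch in Python, with made-up thresholds and a stand-in Light class rather than any real device API:

```python
import time

LUX_THRESHOLD = 50        # "ambient light below a certain level"
OFF_DELAY_SECONDS = 600   # "unoccupied for a certain period"

class Light:
    def __init__(self):
        self.on = False

    def set(self, on):
        self.on = on

def tick(light, occupied, ambient_lux, last_occupied_at):
    """One pass of the control loop, called whenever a sensor reports."""
    if occupied and ambient_lux < LUX_THRESHOLD:
        light.set(True)    # someone is in a dark room: lights on
    elif not occupied and time.time() - last_occupied_at > OFF_DELAY_SECONDS:
        light.set(False)   # room empty long enough: lights off

light = Light()
tick(light, occupied=True, ambient_lux=12, last_occupied_at=time.time())
print(light.on)  # True: someone just walked into a dark room
```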
Or just use Ethernet over power lines – it’s solid working tech.
An additional potential problem is that the improvement-cycle of rental properties can be indefinite. Homeownership rate among adults aged 25-44 has been declining since at least 1980. In current market conditions, many young professionals may never be able to afford to own a home in cities near where we can be employed. So we compete in a rental market for apartments that will not be updated until they are condemned, bulldozed, and rebuilt, because the owners have no financial incentive to do otherwise.
Smart house technology is better and more useful than people seem to be giving it credit for.
The Hue light system lets you turn lights off and on from your phone, as well as adjust colors. There are a lot of use cases for this:
- The best use (IMO) is to f.lux your real-life environment. About 2.5 hours prior to your target bedtime, you have the lights gradually shift to red and away from blue, so that in the hour and a half before bed you don’t encounter any blue light. You can also have the lights gradually dim in the hour before you go to sleep. Combined, these produce really high-quality sleep and good enforcement of your chosen schedule, all automated with alarms so that you never need to press anything once it’s set up. (A minimal sketch of such a schedule follows this list.)
- Sunlight-colored light fading in in the AM is a great way to wake up.
- Having all your lights turn on when you arrive and off when you leave (with geofencing), combined with alarms for morning and evening, means you pretty much never need to hit a light switch. Until you get to stop doing it, you don’t notice how much needing to hit switches impacts your pathing, and how much cognitive load it is to always turn them on or off, even when you have a habit.
- As a minor point, you have a bit more freedom to arrange furniture when you don’t have to worry about blocking light switches.
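As promised above, a minimal sketch of the bedtime wind-down schedule from the first bullet. The kelvin values, timings, and interface are assumptions for illustration, not the actual Hue API:

```python
# Over the 2.5 hours before a target bedtime, colour temperature ramps from
# cool daylight down to warm red, and brightness fades in the last 90 minutes.

DAY_KELVIN = 6500
NIGHT_KELVIN = 2000
RAMP_MINUTES = 150   # 2.5 hours of colour shift
DIM_MINUTES = 90     # brightness fade in the last 1.5 hours

def light_state(minutes_until_bed):
    """Return (colour_temp_kelvin, brightness_percent) for a given time."""
    # Colour: linear ramp from DAY_KELVIN to NIGHT_KELVIN over RAMP_MINUTES.
    t = min(max(1 - minutes_until_bed / RAMP_MINUTES, 0.0), 1.0)
    kelvin = DAY_KELVIN + t * (NIGHT_KELVIN - DAY_KELVIN)
    # Brightness: hold at 100% until DIM_MINUTES out, then fade to 10%.
    d = min(max(1 - minutes_until_bed / DIM_MINUTES, 0.0), 1.0)
    brightness = 100 - d * 90
    return round(kelvin), round(brightness)

for m in (180, 150, 90, 45, 0):
    print(f"{m:>3} min to bed -> {light_state(m)}")
```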
I’m always baffled when I read posts like this. All I can suggest is that if you were as skeptical about all those things you mentioned in Part 1 as you are about Heartmath, you would come to a different conclusion about our technological progress.
No one can argue that technological growth has stopped since the 1970s, but it seems impossible to deny that it has been slowing down significantly. And if nothing else, there’s a relationship between population growth and technological progress that points towards a grim future once the globe has passed through demographic transition.
Most of the low-hanging fruit has already been picked, and we are now left with diminishing returns on ever greater investments of time and resources. That’s why improvements in such things as life expectancy are incremental, despite the massive investment in health and medicine made over the last several decades.
As for space, we will of course never have such ludicrous things as colonies on Mars. Such usage of resources is much too expensive, dangerous and most of all, useless.
In fact, here’s a prediction: no human will ever again, in our lifetime at least, go beyond Earth orbit.
The Chinese might go to the moon just to prove that they can. It would be a nice symbolic project for them to prove that the Mandate of Heaven has passed to them from the US. Other than that, though, no one is going much above low earth orbit.
I pretty much agree with everything that you’ve said here, with the obvious exception of computing, which I think has unambiguously progressed greatly in the past 45 years, and in such a dramatic manner that it’s covering up for the stagnation in most other fields.
“it seems impossible to deny that it has been slowing down significantly”
Just goes to show how subjective all this is. I’ve always felt I live in a time of unprecedentedly rapid technological change, and consequent social change. I sometimes boggle at how different my adult life would have been without the internet etc, and how different my kids’ childhood and adolescence will be from mine.
I’m not sure “the internet” belongs in the technology category. There’s certainly a lot of technology that facilitates broadband, display, flash, etc., but the actual connecting of computers doesn’t really depend on anything but the existence of computers themselves, a telephone infrastructure, and modems.
Someone will definitely go beyond earth orbit again. The world was not made to be viewed through the lens of common sense.
Completely disagreed. I have a friend who works as a toxicology grad-student but is actually a closeted wannabe anti-ageing researcher. You know why he’s a closeted wannabe? Because our funding agencies just don’t consider ageing a disease worthy of treatment or cure.
We’ve done a massively good job at treating diseases of infancy, and we’re currently pouring all our health resources into treating diseases of bad lifestyle. What simply hasn’t been tried is just fixing the damn lifestyles (Japan and France get some credit for this, but basically nobody else does) and pouring our research resources into kicking the arse of degenerative disease and other ageing-associated issues.
And to me this is pretty fucking sad, since by refusing to fight degenerative disease aggressively we condemn people to the hell described by Scott in “Who by Very Slow Decay”, and for no crime greater than staying alive long enough.
So the reason we don’t have flying cars, humanoid robots or a cure for the common cold is that these things wouldn’t be profitable? Come on, Scott.
All those are exceptionally hard to do.
I agree that it’s hard. Colonizing Mars is also hard, but Scott says it hasn’t happened because it’s unprofitable, not because it’s hard.
Profitability is a function of difficulty.
I will say you are spot on with the humanoid robots and more right than you think about the other two.
Humanoid robots are much cooler than they are useful. Almost all useful tasks we imagine humanoid robots doing can be done much better by other forms of automation. For example, car-building robots work better than humanoid robots do by not being humanoid; car-driving robots are better integrated with the car than produced in the form of a humanoid in the driving seat. Fast food automation takes the form of computer systems on the cash register and very good coordination between people doing tasks with different machines. We have a couple of not-so-bad humanoid robots, but they’re usually much more expensive than humans and not as good as humans at things. Just as there’s no reason to colonize space before we colonize Nebraska, there’s no reason to employ humanoid robots until we stop having humans willing to work for minimum wage, often assisted by automation that handles the vast majority of the task.
(there is a little bit of hill-climbing problem here in that maybe if we had thirty years free unprofitable research into humanoid robots we’d have some that would be cheaper than humans in some tasks, but even then I’m not sure)
There have been occasional crackpots with flying cars for decades. The problem is that due to legal restrictions you need to have a pilot license for them and get the flight path cleared by the FAA and stuff. As such, no one wants to buy them, and we can’t hill-climb to ones without those problems. My guess is that we will eventually get flying cars through a roundabout route that combines pre-existing quadcopter and self-driving car technology – but not until those two techs are perfected.
Curing the common cold is just hard. But we didn’t cure that many diseases in the past either. Our common-cold fighting ability is increasing gradually and unspectacularly.
That’s not something to count on either. It must not be, in any case. Certainly the cost went up. But society can become richer and more able to run projects outside the bounds of decadent governments.
There are a lot of huge advantages to a Mars colony that don’t work for Nebraska, but they are all of the hard-to-profit-from variety.
One reason why curing the common cold is hard is that curing many things involves 1) treatments with side effects, and/or 2) curing the main problem but allowing the body’s natural healing to go the rest of the way.
The common cold is too mild a disease to be cured this way; any side effect would probably be worse than the cold, and it *already* is cured by allowing the body’s natural healing to take effect. If we could give someone a pill for pneumonia and say “take this pill, you’ll be fine in a few days, you don’t need to come to the hospital, and you can resume normal activity immediately as long as you make sure nobody else can catch it”, we’d think it was wonderful. For a cold that would be useless.
For what it’s worth, I program humanoid robots for a living, and there *is* a market for humanoid robots, though I agree that in terms of usefulness, non-humanoid machines are better 99% of the time.
Yes. People have been making ‘flying cars’ for ages. You want a plane that can drive on the highway? You can go out and buy a kit right now. But that would be idiotic because they’re horribly expensive and it turns out a plane which can drive on the highway is both a bad car and a bad plane. Also, you need to learn how to fly, and, incidentally, small-plane aviation is the riskiest kind of aviation. Humanoid robots likewise serve no real purpose; the Japanese have been throwing money down that rathole for decades now, and is anyone beating down the doors for Asimo? Nope. As Scott says, why would you want a horribly expensive, fragile, limited humanoid robot when you can hire a Central American off the books for $5 an hour? (Now, an arm robot for precision manufacturing who costs a lot less than a skilled American – now that’s a different story.) And so the Japanese humanoid robots are seeing their lunches eaten by non-humanoid robots invented by Israelis and Americans etc. (What robots and drones were sent into Fukushima? Not humanoid ones.) I don’t know enough about the common cold to say.
Exactly like with the Ancient Greek attempts at a steam engine; as the story goes, they laughed at the early steam-driven toys and said that slaves could do anything better. No doubt someone also protested that mechanization would leave slaves with nothing to do.
This is the *other* kind of a contradiction between productive forces and the relations of production; Marxists usually talk about how the growth of the former leads to an explosive change in the latter, but here we see the existing order successfully stifling innovation.
I don’t think ancient Greeks would be bothered by a machine that put slaves out of work.
I don’t understand your point. The ancient Greek steam engines didn’t do anything useful at all but emit steam and spin. It took two-plus millennia and access to ultra-cheap coal before they were useful for anything at all (such as pumping out mines). Similarly for flying cars. They’re cars. They fly. You can use them, so we have flying cars. It’s just that they don’t do anything worthwhile compared to regular cars and commercial aviation, despite the romantic images we have of them.
Or is this some sort of no true Scotsman argument, where the proponent argues for any technology that is not economical or worthwhile, ‘ah, but you don’t really have that technology, because look, it’s not worthwhile!’ In which case, I suppose humanity does not have the technology to go to the moon in the same way we do not have the technology for flying cars…
Uh, good point about coal, but the general point is that the pressure wasn’t there on the Greeks to do something more with what they had; something profitable + sufficiently scarce labour power seems to push things ahead. Like the connection that many historians seem to draw between the Black Death and the beginnings of the Industrial Revolution. Although that also appears to be in debate.
The ancient Greeks loved mechanization. The horse-driven Archimedes screw wells they built in North Africa were still in use in the twentieth century.
Hero was an archeologist building toys based on the books of a centuries-dead civilization. It’s true that his sponsors recorded one line laughing at the idea of saving slave labor, but just one.
@gwern
But why were the engines so useless? Were there some technological advances that were needed? Or was the lack of useful engines an effect, rather than a cause, of lack of interest?
Re: Steam engines needing coal to be useful.
You can fuel steam engines with wood as well as coal. In places that had plenty of wood steam trains and other steam engines ran on wood, not coal. They used coal in England because wood is expensive in England, just like the English had to learn to make coke before they could make steel on a large scale because they didn’t have enough wood to produce charcoal.
> What robots and drones were sent into Fukushima? Not humanoid ones
Funny you should mention that, seeing as DARPA’s latest grand robotics challenge was for humanoid robots that could navigate Fukushima. They weren’t entirely satisfied with the 2014 crop, so they’re running it again in 2015.
What possible use would there be for humanoid robots? My understanding is that the humanoid shape evolved to allow for endurance running. And wheeled vehicles do that even better. I suppose there are some jobs that a humanoid robot would be well suited for, but look at the sort of jobs that robots are depicted in fiction as doing: pushing around a vacuum machine, driving a car, typing at a typewriter. All of these jobs simply consist of controlling other machines. How in the world does it make sense to build a brain to control a machine, and then not only not put that brain in the machine, but have the brain interface with the machine with human-like arms? Why build a robot to push a vacuum cleaner around, rather than put motors in the vacuum cleaner itself? Why build a robot to drive a car (that is, build a computer, have the computer control human-like arms, have the arms control a steering wheel, and have the steering wheel control the car), rather than putting a computer in the car and having the computer control the car directly? That’s like having one robot read off a sequence of characters into a phone, and another robot on the other end of the line type those characters, rather than using a modem.
Using machines to imitate humans is, for the most part, a step backwards. We humans don’t use steering wheels because they’re the best way of controlling a car, we use them because we aren’t capable of generating a modulated electric current. If we could have our thoughts directly transmitted to the car’s onboard computers and control the car without a steering wheel, that would be a major advancement.
> or a cure for the common cold
I look forward to the day when that’s no longer a go-to example. (Seriously, watch it. We can at least keep rats from getting the flu in a safe way, and it looks like the same technique would work on the common cold, ebola, or dengue fever for that matter, even after the disease hits. Here’s the original paper.)
I think there’s an underlying problem of judging by the standards of people who weren’t aware of the difficulties involved in a project.
Nitpick: I think lasers have always worked at room temperature, and fit in (large) pockets. The first laser was the length of a pen.
I think the decade of the 1960s really gets to lay claim to the most significant advances in laser technology…
Aside from Maiman’s ruby laser in 1960, the helium-neon laser was invented in the same year at Bell Labs (still one of the most scientifically and industrially important lasers today, although not found in consumer products). Semiconductor lasers (the kind used in supermarket checkout counters) were invented in 1962 at General Electric’s labs in New York.
By 1969 they were already using lasers to measure the precise earth-moon distance, using prisms placed by the Apollo program.
Since 1969, we have made lasers cheap, small, and blue.
Do we have a way of measuring total technical progress? I can think of a few obvious proxies (number of new patents filed, etc.) but all of them have obvious flaws and are subject to gaming and external influences which cause them to be unreliable indicators.
It’s important to note, however, that a supermajority of the items on Scott’s list have to do with computing. A more even-handed assessment of the last forty years could be “stagnation on all technical fronts except computing, which has progressed so amazingly fast as to render the lack of progress in other fields irrelevant”. Even the progress items which weren’t directly related to computing, such as new medicines, have nonetheless been enabled by computing in a pretty obvious way. It’s unclear whether this should count as progress on all fronts or not.
I own a Chromecast. This is a device that mostly lets me play Netflix on my TV from my phone. I got it because it was more convenient than playing Netflix off of my Blu-ray player.
I would pay $50 for a device that let me control the lights in my house from my smart phone, the light switches I have are not nearly convenient enough.
I have been hoping for this post for a very long time, since the first time I got in an argument with Jim over this. Scott, I love you so so so much. Like, in a platonic way.
But SPACE! The final frontier!
That’s pretty serious — and true.
Although a priori there might be no reason to doubt that the grapes are sour, it does look kind of suspicious if the fox is continuing to jump for them while making the claim.
http://images.dailytech.com/nimage/NASA_Budget_Inflation_Adjusted_2010_Wide.png
NASA appears to have had about half of the Apollo-era budgeting rate for four times the Apollo-era duration. If the amount of progress post-Apollo doesn’t seem to be double the amount pre-Apollo, it’s not because we wisely decided to focus our efforts on Wikipedia instead.
And look what they’ve done. We’ve gotten closeups of every planet and some other things. We’ve sent objects out of the solar system. We landed on Mars multiple times with increasingly sophisticated probes (and a significant increase each time) and given Saturn a long-lasting visit, mapped a zillion stars and related objects at ultra-high resolution, and built two space stations, one of which is still running. We have shot up a comet.
Apollo… landed on the moon six times, with slight improvements to scientific capabilities each time.
I am skeptical as to the return of both cases.
What return?
It’s not precisely technology, but as a mathematician I’d like to talk about developments in math for a little while. Some of this, particularly the proof of Fermat’s Last Theorem, I think does belong on your list.
Here are some things we’ve done since 1972:
• The first computer proof (of the four color theorem)
• Classification of the finite simple groups
• Invention of elliptic curve cryptography
• Development of the entire field of quantum algorithms (which, no, does not count as an advance in computing)
• Proof of Fermat’s Last Theorem (!)
• Proof of the Poincaré conjecture
And those are just the surface results, leaving aside the extremely broad and extremely deep developments which led up to each of those points.
ETA: And like you cannot tell me that the proof of Fermat’s last theorem is not a result of a level of impressiveness on par with landing on the moon. And yeah, some of what Wiles did amounted to sitting in his attic without collaborating with people, but popular legend greatly exaggerates the extent to which that was true. He built on a massive framework developed by hundreds of people, with much of the work having been done post-1970.
Thank you, I wish I’d brought up the Fermat proof in the original argument. I remember watching a video about it in a high school math class and thinking it was mightily impressive.
Somewhat OT, but do you know of any source that describes the details of Wiles’ proof in a way that’s accessible to a person without a degree in mathematics, if such a thing is even possible? I’ve read several popular accounts that describe Wiles himself, the math he used, and the reaction of the math community, but nothing that gave any insight into the actual content of the proof.
Number Theory Ph.D. (in a similar field) here. The Wikipedia article isn’t bad, though probably not really a good intro for people who aren’t familiar with the background. Simon Singh has a book called “Fermat’s Enigma”; I don’t believe I’ve read it but other Singh books I’ve read have been good.
Has any practical technology of any kind resulted from the proof of Fermat’s Last Theorem? Was it of benefit to anyone other than professional mathematicians who got to read a neat proof? If not, I feel entirely justified in claiming that the moon landing was just a *little* more impressive.
While I fully admit that my field is almost totally useless (but pretty!), I’ve never read any particularly convincing case that the moon landing accomplished anything other than “being really cool” either.
The techniques developed and employed towards the proof massively advanced number theory as a field, with major spillover to other fields, especially algebraic geometry. It certainly resulted in a great deal of mathematical technology, and allowed advances in other mathematical fields.
Now, we could have a discussion about whether advancing math generally has an impact on people’s lives, but I’m kinda hoping you already agree with me that it does, eventually. If not, I can at least present the example of elliptic curve crypto, which would not have been developed without advances which were made specifically in pursuit of Fermat’s Last Theorem.
(That said, I did have a number theory professor who liked to say that “Number theory was for a long time the last pure field of mathematics, in that it had no application. Then cryptography was invented. Thus it is the job of every number theorist to break every cryptographic code, so number theory can go back to being useless.” This was very tongue in cheek and in any case false, because modern advances in number theory often have applications in other fields which themselves have applications in engineering and physics.)
The fact that the finite field points of an elliptic curve form a finite abelian group is a pretty easy fact that is hard to attribute to the pursuit of FLT. It is also a fact that is a century old. Maybe the fact that elliptic curves are groups is a difficult fact, but it is two centuries old and was definitely not motivated by FLT.
It is pretty hard for me to think of any advances that were made in pursuit of FLT. Maybe 2-descents, but the causality is backwards.
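As a toy illustration of the century-old fact in question (the parameters below are arbitrary; nothing about this particular curve is cryptographic): over a finite field, the solutions of a cubic like y² = x³ + 2x + 3, together with a point at infinity, form a finite abelian group, and ECC works in large groups of exactly this kind.

```python
# Count the points of y^2 = x^3 + 2x + 3 over F_97. With the point at
# infinity they form a finite abelian group under the chord-and-tangent law.
p, a, b = 97, 2, 3

points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]

order = len(points) + 1  # +1 for the point at infinity
print(order)             # Hasse's bound: |order - (p + 1)| <= 2 * sqrt(p)
```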
Douglas: Yeah, but that fact is not sufficient for ECC to be trustworthy; it is at least my understanding that we wouldn’t have really been able to analyze cryptographic applications of elliptic curves without techniques from arithmetic algebraic geometry, many of which were at least originally motivated by FLT. For example, I know some of Frey’s work on ECC was motivated in part by attempting to link Taniyama-Shimura with FLT (which eventually became the modularity theorem, which is what Wiles proved and which implies FLT).
That said, my grasp of the history here is at best tenuous; mostly I’m going on my recollection of stories relayed by my math and CS professors.
@Anon: If the advances made in pursuit of Fermat’s Last Theorem resulted in elliptic curve crypto, I am certainly willing to concede that it was a major accomplishment. However, there have existed other approaches to public-key cryptography since 1977, and I don’t think the marginal improvement of elliptic curve crypto over RSA made much difference in the lives of ordinary people. The primary benefit of such advances is probably yet to come, perhaps in the advent of cryptography that can be proven secure without the use of any assumptions.
No, that really is the whole of it. DSA was premised on Z/p* being hard to understand until the number field sieve was invented. This made DSA sizes explode, prompting a search for other groups, first tori and then elliptic curves. There is no positive analysis, just the last one standing.
It’s true that some curves have been ruled out by slightly more sophisticated techniques, but all by methods over finite fields: vanilla algebraic geometry, nothing arithmetic.
While we’re on the matter of intersections between math and computing, modern theorem-proving assistants are an amazing piece of technology undreamed-of by yesteryear’s mathematicians, and Homotopy Type Theory is talked about as an entirely new, fundamentally computational foundation for mathematics, capable of replacing Zermelo-Fraenkel Set Theory.
And then we have Hutter’s formalization of the principles behind Universal Artificial Intelligence in 2000, which has given us no practical applications so far but has set an investigatory paradigm that could drag an entire fundamental field of computing out of its rut.
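For a flavor of what those theorem-proving assistants actually do, here is a minimal sketch in Lean 4 (a toy statement, nothing from the FLT machinery): the assistant’s kernel mechanically checks every step of the inductive proof.

```lean
-- Commutativity of addition on the naturals, proved by induction;
-- the kernel verifies each rewriting step.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => simp [Nat.add_succ, Nat.succ_add, ih]
```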
If you look at the economy as a whole, its growth rate, and how people actually live their lives, the last 45 years, while having lots of instances of tech progress, is extremely stagnant compared to the previous 45 years.
Only if you’re being absurdly parochial about it.
If you go anywhere in China, 2014 is vastly better than 1969: you could argue that 1969 was worse than 1924 in a fair number of ways. If you go anywhere in Eurasia east of Trieste, the same applies. If you go pretty much anywhere in Latin America, ditto.
Agreed. I’m being extremely parochial, but since the US sets almost everyone’s vision of how to live, win, be successful, whatever, that provinciality is appropriate in many respects.
Agreed that 1969 was a step back in many ways for most of the world, but not technologically.
I disagree. I disagree so strongly it’s not even funny. On every single metric except what we think of as explicitly “political” ones, the average Chinese person – let’s say, a young rural woman – was looking forward to a much better, safer, more comfortable and far longer life in ’69 than in ’24. And yes, that’s after accounting for the famines and the Great Leap Forward and the Cultural Revolution and all the bad things.
I don’t know anything about Latin America, unfortunately, except for its early 20th c. image as the land of tomorrow that never came to be. So that comes across as far less implausible.
China is in Eurasia east of Trieste! Are you working for the DoRD?
But China’s mostly playing catchup to e.g. where the US was in 1969. If you look at the lifestyle of the cutting edge, seems clear there was a lot more change in 1850-1900 or especially 1900-1950 than 1950-2000. Steam, electricity, telegraph, photographs, IC engines, agrarian to industrial and then service economies. Vs. computers and smartphones.
But, but, that graph!
Do you have some examples of developments from 1925-1970 which majorly impacted people’s lived experiences, or a list elsewhere you could point me to? Thinking about it for a moment it seems like most of the more impressive advances – space flight, the invention of computing, military tech – didn’t change how people were living their lives.
Also I’m kinda dubious of the claim that tech progress doesn’t count as a major shift. Mobile/smartphones and the web, especially, seem like they ought to count. Unless there’s some other reason we’re discounting tech, in which case, what is it?
I think they were developed earlier, but washing machines and vacuum cleaners became widespread during that period, which greatly changed how people live their lives.
Also, refrigerators. My grandma had to keep butter under water so it didn’t go bad.
Also cars. OK: everything that makes a “modern” car a car was invented before 1925, but it wasn’t until about then that most cars sold were something that you or I would find familiar (provided you know how to drive a standard transmission car).
jet planes, superhighways, skyscrapers
Ah, thank you.
It does sort of seem to me that pretty much all examples so far came out of improvements in a single field, namely, materials science, much like most modern technological advances come out of improvements in computing. Does anyone have counterexamples, or am I otherwise wrong?
NASA recently estimated that getting back to the moon would take longer than the original 10 years. Has it learned nothing?
If NASA is driven by cost-benefit considerations, why did it switch to the space shuttle? Perhaps that is a more cost-effective way of scaring the Soviets than going to the moon yet again, though it ought to have stopped being scary when unveiled. But it’s not a good sign for progress that illusions are more impressive than technical accomplishment. Indeed, some speculate that people working on the space shuttle and SDI didn’t even try to produce more than an illusion.
I’m not an expert in this subject, but from what I remember, the answer is “because the space shuttle was billed as being way more cost-effective than it really was”. Initially the Shuttle was planned to launch about once a week, leveraging economies of scale in the non-reusable components and enjoying aviation-like reliability and mission readiness; instead it ended up launching every couple of months, and despite everyone’s best intentions about 2% of all launches ended up vaporizing a schoolteacher or something, leading to further delays and expenses. Cost overruns are common in government projects, but here we’re talking more than an order of magnitude.
There were also issues of mission drift. The original designs were wingless (saving weight) and had other advantages that would have increased launch frequency, but these were overruled by Air Force requirements.
There was also a bit of a “chicken outcompeted the egg” problem, I think especially early on. The Shuttle has a heavy lift capability that few applications need. The applications that do need this were outcompeted by the shuttle.
> NASA recently estimated how much time it would take to get back to the moon as longer than the original 10 years. Has it learned nothing?
I’m trying to find this estimate. Do you have a source? The one I could find was 13 years to build a permanent settlement on the moon as a step towards sending people to Mars. I suppose the 10 years was when they would first touch the Moon under that plan. Most of that time would not be spent re-developing the ability to get to the moon, but developing for the first time the ability to place a permanent settlement there – something WAY beyond Apollo.
If a monolith suddenly appeared on the Moon, I’m sure we could get people on-site much much sooner than 10 years.
The Constellation Program proposed 16 years to the very first landing on the Moon. Yes, this would be followed by long-term bases to practice for Mars, but these were not even assigned a date. Von Braun was also trying to get to Mars.
Okay… how does taking a little longer to do something around two orders of magnitude harder count as stagnation?
WTF? The 2x timeline wasn’t for harder things, but just a first landing. The harder things weren’t “a little longer” but never even assigned a date.
Let’s be more concrete: NASA spent 2004-2009 failing to recreate the Saturn I. It definitely cannot accurately budget and schedule the recreation of half-century-old projects.
1) WTF indeed. You don’t go to the moon until you have something to DO there. This isn’t a space race this time… what was the mission to be performed on that first landing?
2) the Delta IV heavy is stronger than the Saturn I. Yes, it’s weaker than the Saturn V.
3) With all the other stuff NASA has been doing, and a reduced budget, it has much less than half the $ resources it did then.
>I will maintain that even if we had enough space flight technology that elementary school classes routinely took field trips to Mars, Mars would end up with two or three scientific bases, a resort where tourists could take their pictures on Olympus Mons, a compound of very dedicated libertarians, and nothing else.
I don’t think you have given enough thought to the practical realities here. Is crashing asteroids into planets in order to get at their minerals *incredibly* lucrative from a resource gathering perspective? Yes. Can you do it on Earth? Good fucking luck. Is building fusion pulse generators (giant structures, possibly underground, filled with some working fluid in which you detonate fusion, or fusion boosted fission bombs) to generate power the single most energy-lucrative thing we could do with known technology? Yes. Can we do it on Earth without Greenpeace and the US government and God knows who else crawling up our assholes and eating our insides? No.
Are nuclear pulse propulsion craft the best way to get mass moving wherever we want it that we know of, by many orders of magnitude? Yes. Can we build those craft on Earth and launch them from Earth? LOL.
Nuclear power is to chemical power what chemical power is to fucking draft animals. It’s an *incredible* potential source of ability to fucking *do things,* and it is unbearably restricted on Earth precisely because of that potential. It is almost impossible to overestimate the importance of being able to use the most powerful workhorse in the known universe, fusion power. And we can’t do it here on Earth because we have to make a good, relatively macro-scale imitation of the conditions in the Sun to usefully harness it. On Mars, that ain’t a problem.
Oh, and of course those asteroids I was mentioning earlier are incredibly rich in the stuff we need to harness fusion power, fissionable elements. There’s just an amazing amount waiting out there in space for us to access when compared to what we can scrounge from our crust.
So why not the Moon? Well, first of all the Moon is a hellhole compared to Mars. Mars has CO2 in the air and tons of water in the soil which can be made into pretty much everything we need to fuel vehicles and ourselves. Also the Moon is a little too close for the asteroid crashing thing I mentioned to escape busybodies who care about things like species survival. And Mars is a good fixer-upper planet because once you crash a few comets into it it should warm appreciably and there should be even more airborne CO2 and water in the atmosphere for us to use, which in turn makes Mars warmer, etc.
What is so lucrative about mining asteroids? If you have actually given any thought to “practical realities,” how about some numbers?
“What is so lucrative about mining asteroids?”
They have a shitload of stuff we want.
“If you have actually given any thought to “practical realities,” how about some numbers?”
Go look up metal rich asteroids.
I think the inquiry is less about the payoff potential and more about how one would go about collecting asteroids, crashing them into planets in a relatively safe way and the costs of the technology and organisation involved in such an endeavour.
It may well be that we have the technology to do so, but perhaps it is too expensive or it is not advanced enough to mitigate risk to an extent that it becomes less of an expensive, risky gamble.
Yes, asteroids have a lot of stuff we want in them.
So do, for example, the Turquoise Hills in the south of Mongolia and the Sudbury Basin in the middle of Ontario.
The Turquoise Hills are two hundred miles from the Chinese border where the stuff would be useful; the Sudbury Basin is five hundred miles from Detroit; the asteroids are two hundred million miles from anywhere that the stuff might be useful.
>It may well be that we have the technology to do so, but perhaps it is too expensive
Obviously it’s too expensive now. I was replying to Scott’s general notion that there’s no compelling reason to go to Mars, with a compelling reason to go to Mars. Oh, another thing is that once you’re building rockets on Mars you’re way ahead of Earth because it’s waay easier to launch from Mars. Much thinner atmosphere, less gravity.
It’s absurd to use existing space costs as a frame of reference for determining the practicality of future space ventures when we don’t even have reusable chemical rockets yet. And there is mounting evidence that reusable rockets are possible, given spaceX’s first stage return successes and of course the shuttle program’s admittedly inefficient re-use (which nevertheless proves that re-use is possible in principle.)
>the asteroids are two hundred million miles from anywhere that the stuff might be useful.
Yeah, so we change the trajectory, crash it into a place where it can be useful. Not really viable for Earth, viable for Mars.
Asteroid mining is interesting because it may be practical to collect a huge amount of precious metals for a really low investment. It’s not like any kind of normal mining operation.
Are you talking about Project PACER? Or am I reading too much into your description? I’d think that any inhabited planet with fluid transport would have roughly the same odds of radioactivity containment failure from repeated underground explosions.
>I’d think that any inhabited planet with fluid transport would have roughly the same odds of radioactivity containment failure
Yeah but who cares? We aren’t gonna be eating Martian fish. We aren’t gonna be breathing Martian outside air. We aren’t gonna be sending our kids to play naked in Martian soil. If there’s some radiological contamination, fine; as long as we don’t do something *absurdly* stupid we should be fine, since the reality of Martian life means we can’t be in direct contact with the outside Martian environment anyway. Direct irradiance is a *very* negligible concern compared to breathing the shit in or eating it, or even getting it stuck on your skin in the form of dust.
uh, what? nerva had an isp of 850s (according to wikipedia). iirc the chemical boosters for the saturn v moon lander had like 400s isp; yeah, that’s a pretty amazing boost (especially when you consider the knock-on effects on fuel economy) but it’s not like “fucking draft animals” have an isp of 200s. :l
assuming you’re talking about fusion engines, not fission: nobody here has the credentials to even guess at what they would look like, whether they’d be plausible (or even physically possible), etc. ad infinitum. it doesnt make any sense at all to make conclusions from it.
on asteroids: bringing a sizable asteroid (i.e., not just for research purposes) back to earth requires something like a couple of km/s of delta v. note that you’re dealing with a payload of many tonnes; it’s just not plausible. mars might be because a colony can be self-sustaining, but asteroids… nope.
NERVA isn’t all that hot as nuclear thermal designs go; a glance over the Wikipedia page suggests that 2000s wouldn’t be out of the question for a more modern closed-cycle gas-core design, or up to 5000s for the dirtier and more speculative open-core type.
Nuclear pulse propulsion is even better, with theoretical specific impulse in the high thousands of seconds for a basic Orion-type design up to around 100,000 for more speculative designs.
A third option would be a nuclear-electric rocket, with a small nuclear reactor powering one of several types of electric thruster, usually using ionized gas as reaction mass. Project Prometheus would have gone this route. This is strictly an upper stage, though; even relatively high-thrust plasma systems like VASIMR don’t have the juice to drive anything out of a gravity well deeper than a grapefruit’s.
on thermal nuclear: wikipedia says, with regard to gas-core engines,
so yeah, this fits comfortably into the “cool but not revolutionary” category.
as for pulse nuclear, it’s sufficiently speculative that nobody can really say “it will have at least this much isp”, excepting the very few tests that were done. and again, a few thousand isp isn’t nearly enough to bring back an asteroid of any size (especially considering that you have to use standard propulsion until you’ve already gotten to it). as for even more theoretical designs, i highly suspect that they’ll go the way of basically every other technology like them: they’ll turn out to be incredibly underwhelming when we get close enough to actually use them. i concede that nuclear pulse designs would be very useful for redirection.
on nuclear/ion hybrid: is isp really the limiting factor of an ion engine? ultimately the thrust will be too low for any burns below solar orbit.
For ion, in its applications, poor thrust usually isn’t a problem.
You are ahead of me. Even a 1000s nuclear thermal rocket can achieve orbit from earth with a mass ratio of only three (commercial airliners have a ratio of two). On Mars it is even easier and can use the atmosphere as propellant.
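A quick check of that mass-ratio claim via the Tsiolkovsky rocket equation, taking the comment’s own 1000 s figure (a rough sketch that ignores staging, drag, and gravity-loss details):

```python
# delta-v = Isp * g0 * ln(mass ratio); does a mass ratio of 3 reach orbit?
import math

isp = 1000.0        # s, the nuclear-thermal figure quoted above
g0 = 9.81           # m/s^2
mass_ratio = 3.0

dv = isp * g0 * math.log(mass_ratio)
print(f"delta-v = {dv / 1000:.1f} km/s")  # ~10.8 km/s; LEO takes ~9-10 with losses
```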
yes, because it isn’t applied in situations where its poor thrust is a problem. this seems obvious.
don’t get me wrong, it is pretty awesome. it’s just not the “incredible source of energy to fucking get things done” that ben billed it as.
>assuming you’re talking about fusion engines, not fission: nobody here has the credentials to even guess at what they would look like
Project Orion.
>on asteroids: bringing a sizable asteroid (ei, not just for research purposes) back to earth requires something like a couple of km/s of delta v. note that you’re dealing with a payload of many tonnes
See propulsion methodology of Project Orion. Also we don’t/might not wanna move it to Earth because of the risks, perceived or real. Mars is a much better environment for that sort of thing, as it’s not covered by vulnerable, nervous, nuclear armed humans.
Also it is utterly absurd to assume NERVA is anywhere close to the peak of possible nuclear thermal efficiencies. Especially if radiological contamination isn’t a big concern, like say ON MARS.
okay, addendum: “aside from saying ‘we’d explode some stuff behind our rocket’, nobody here has the credentials to even guess at what they would look like”
i take back what i said about “several tonnes”, it’s more like several billion tons. you’d have to impart on the order of some petajoules of energy. and with the final stage. i shouldn’t have to tell you that this isn’t feasible.
sure, but it’s around the same magnitude of absurdity as assuming that alcohol propulsion is anywhere close to the peak of possible chemical engine efficiencies (spoiler: it’s within a couple of hundred seconds). when making speculations on the efficiency of future technology, it’s probably best to be conservative.
There’s a lot of rocks out there in a lot of orbits. The Wikipedia page on asteroid mining mentions a number of asteroids in the 2-20 meter range that could be brought into Earth orbit with less than 500 m/s^2 of delta-V.
That’s still pretty daunting, though. A 10-meter M-class spherical asteroid would weigh around 14 million kilograms. Applying that kind of delta-V to it would take on the order of seven billion newtons of force. That’s way too high for chemical rockets; it’s the equivalent of a few dozen Saturn Vs. It looks a little saner for nuclear electric systems, but even for a 5-megawatt VASIMR unit (which is huge; current ones are about 200 kW), we’d be talking a burn time of about 21 months. Also a lot of reaction mass, although you could probably obtain some of that in situ; one of the nice things about VASIMR is that it isn’t too picky about what you push through it.
I don’t feel totally comfortable speculating about nuclear pulse propulsion in this context, but there we’d be talking thrusts in the meganewton range. You do the math.
I think we could probably do it, if we put our mind to it, but it wouldn’t be easy. And it almost certainly wouldn’t be economical; 14,000 tons of nickel-iron isn’t worth expending that kind of energy. I don’t know what fraction of heavier metals you’d get, but it’d have to be pretty high for this to be worth a shot.
>okay, addenum: “aside from saying ‘we’d explode some stuff behind our rocket’, nobody here has the credentials to even guess at what they would look like”
This is pure laziness. Please look into Project Orion, it wasn’t just “Hey guize let’s detonate nooks behind us it’ll be kool lolololol”
>i take back what i said about “several tonnes”, it’s more like several billion tons
Depends on the asteroid. Depends on how much of the asteroid you want to move. Nukes are an extremely practical way to blast things apart, doncha know.
>sure, but it’s around the same magnitude of absurdity as assuming that alchohol propulsion is anywhere close to the peak of possible chemical engine efficiencies
No, it fucking isn’t. And anyone with basic chemistry knowledge could tell you that. A carbon chain with hydrogen stuck on it isn’t terribly, outrageously different from another carbon chain with hydrogen stuck on it in terms of energy per weight. And ISP is about temperature, and the limiting factor for temperature is keeping the rocket intact NOT fuel energy density. Though alcohol is not ideal, worse per liter and per kg than RP-1 in terms of energy content. Possibly bad for seals too? Seem to remember something about that for my stove, at least.
>And ISP is about temperature, and the limiting factor for temperature is keeping the rocket intact NOT fuel energy density.
I should say – ISP is about temperature for a given fuel paradigm. Have to compare apples to apples, hydrocarbon engine to hydrocarbon engine. You get more ISP with less massive fuel. Which is why NTR is so amazing even in its larval stage, you don’t need oxidizer at all so you can use all LH2.
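For readers keeping score, the idealized relation behind this exchange: exhaust velocity scales roughly as the square root of chamber temperature over molecular mass, so at a fixed temperature, lighter exhaust means higher Isp. A toy sketch (frozen flow, vacuum expansion, a single gamma; real engines lose to dissociation and nozzle effects, so treat the numbers as illustrative only):

```python
# v_e ~ sqrt(2*gamma/(gamma - 1) * (R/M) * T_c), and Isp = v_e / g0.
import math

R, g0, gamma = 8.314, 9.81, 1.4   # gamma held at 1.4 for simplicity
cases = [
    ("kerosene/LOX exhaust, M ~ 23 g/mol", 0.023, 3600.0),
    ("NTR on pure hydrogen, M ~ 2 g/mol", 0.002016, 2700.0),
]
for name, molar_mass, chamber_temp in cases:
    ve = math.sqrt(2 * gamma / (gamma - 1) * (R / molar_mass) * chamber_temp)
    print(f"{name}: Isp ~ {ve / g0:.0f} s")
# ~310 s vs ~900 s: hydrogen wins despite the lower temperature.
```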
@norn: fair enough. i was just looking for the largest metal-rich one, because i assumed that economies of scale would make it more worthwhile; i forgot to factor in distance.
are the units of delta v m/s2? i’ve been saying m/s for a while now. oh dear.
(also, i’m grateful that you’re so civil even when everyone else is being abrasive and aggressive. thanks.)
@ben:
please don’t be condescending. fusion engines are very speculative, entirely unproven, and require technology that does not yet exist. ofc “we’d explode some stuff behind our rocket” is hyperbole, but there haven’t even been simulations done yet.
hydrocarbons aren’t really used as a rocket propellant anymore? unless you’re russian, i guess. generally it’s either liquid hydrogen or a nitrogen compound. plus oxidizers have gotten way better than liquid oxygen. and fuel isn’t the only advance in liquid propulsion; now we’ve got bell nozzles and so on.
… what? no. that’s trivially false. if it wasn’t, every fuel would have exactly the same isp, and advances would be entirely in engine design. it is *theorized* that we might *eventually* get to a point where nozzle temperature becomes a limiting factor, but it certainly hasn’t happened yet. (also only the “i” in “isp” is capitalized)
EDIT: apparently you corrected yourself. you’re still wrong. “less massive” means literally nothing. if you meant “less molecular mass”, liquid fluorine as an oxidizer has strictly higher isp and molecular mass so it’s obviously not a tell-all trend. nor does it have anything to do with maximum temperature.
No, you’re right, it’s in m/s. My mistake; not thinking too clearly today. That means that instead of seven billion newtons of force, we should be talking seven billion N-s of impulse.
The rest of my math is still valid AFAICT, with the exception of my estimate for chemical rockets, where I neglected burn time. It appears that a complete burn of the Saturn V’s first stage would be just short of the impulse required, assuming the mass lost during the burn is negligible compared to the asteroid’s mass.
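A minimal sketch of the corrected arithmetic, taking the figures quoted upthread as given (the asteroid mass and delta-V are the comment’s own numbers; the S-IC thrust and burn time are rough public figures):

```python
# Impulse to capture the rock vs. a ballpark Saturn V first-stage impulse.
mass_kg = 14e6        # kg, quoted mass of the 10 m M-class asteroid
delta_v = 500.0       # m/s, quoted capture delta-V

impulse_needed = mass_kg * delta_v   # N*s
s_ic_impulse = 34e6 * 165            # ~34 MN of thrust for ~165 s

print(f"needed: {impulse_needed:.1e} N*s")   # ~7.0e9 N*s
print(f"S-IC:   {s_ic_impulse:.1e} N*s")     # ~5.6e9 N*s, just short
```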
>please don’t be condescending. fusion engines are very speculative, entirely unproven, and require technology that does not yet exist
Speculative? Sure. Unproven? Sure. Require technology that does not exist? Depends how broadly you intend that phrase to be taken. The Project Orion folks were quite detail-oriented, and made a very good argument that this could be done with modern technology. Attack that argument if you will, but dismissing it out of hand simply reveals your ignorance.
>hydrocarbons aren’t really used as a rocket propellant anymore? unless you’re russian, i guess. generally it’s either liquid hydrogen or a nitrogen compound. plus oxidizers have gotten way better than liquid oxygen.
oh god
http://en.wikipedia.org/wiki/Soyuz-2_%28rocket%29
LOX/RP-1
http://en.wikipedia.org/wiki/Falcon_9
LOX/RP-1
http://en.wikipedia.org/wiki/Atlas_V
LOX/RP-1
> apparently you corrected yourself. you’re still wrong. “less massive” means literally nothing.
This is absurd. What the fuck else could I mean when we’re talking about fuel ISPs?
>liquid fluorine as an oxidizer has strictly higher isp and molecular mass so it’s obviously not a tell-all trend
Could you get me some information on that test engine? AFAIK it never flew and was never supposed to fly. Things that don’t have to fly have different design constraints. One can make LOX/RP-1 go faster if one plays with weight or dimension or efficiency constraints. I’m sorta interested in the ISP of a liquid fluorine engine that can actually get off the ground. But anyway, the point is that within any given fuel paradigm, ISP is about temperature, which is constrained by the engine’s ability to survive.
>… what? no. that’s trivially false. if it wasn’t, every fuel would have exactly the same isp, and advances would be entirely in engine design… nor does it have anything to do with maximum temperature.
The modern chemical rocket paradigm is all about advances in engine design. And yes, ISP is about temperature within a given fuel paradigm – what the hell else do you think it’s about? And the temperatures you can reach do depend on how good you are at keeping your engine together.
>every fuel would have exactly the same isp
I don’t think that’s a misconception that my previous posts could honestly lead to, but if so consider it corrected.
again, please don’t be condescending. they were detail-oriented, yes, but fell far short of providing a working schema or anything remotely tested. as for “made an argument that it could be done with modern technology”, afaik they only argued that fission nuclear could be done with modern tech. fusion was left relatively alone.
not really proving anything. only a fraction of modern rockets use hydrocarbons at all, and if they do it’s entirely in the first stage.
density? idk. it’s your job to make your text clear.
i don’t know about liquid fluorine, but details of various halogen oxidizers can be found on page 73 forward of “ignition!”. i’m not sure if there ever was a test of liquid fluorine specifically that was intended to fly, but there certainly have been tests of fluorine compounds.
source? i can’t find anything that refers to engine temp as anything other than a “consideration”.
because we’ve discovered basically every chemical fuel available. not because of temperature.
false dichotomy. it is about temperature, but the temperature you can reach is dependent on energy density and not engine integrity. (you’re still capitalizing the “sp” in “isp”?)
it did honestly lead to it, so evidently your model needs correction
Robert Zubrin suggests that a nuclear salt water rocket could obtain 10,000s isp.
man, remind me never again to question anything about the future on this website.
i’m going to assume “the only nuclear rocket design that has been thoroughly tested and proven” is a fairly representative (if very rudimentary) member of its class, robert zubrin’s suggestions aside.
A nuclear salt-water rocket is an atom bomb in liquid form. It’s not something you can build on Mars; you need an industrial complex on the scale of Oak Ridge to get the U-235 or Pu-239. It’s not something that you can fire off from Earth’s surface even in the middle of somebody else’s least-favorite desert; it’s not something you can plausibly get permission to launch from Earth’s surface and fire off elsewhere, because it is actually reasonable to worry about the consequences of dropping tons of enriched-uranium bromide in a launch accident.
We can totally launch nuclear pulse propulsion craft from the Earth. If we hadn’t gotten into the Vietnam War, we likely would have.
They may be one of the safest methods of space lift per hundred tons to LEO.
>We can totally launch nuclear pulse propulsion craft from the Earth.
That would require major liberalization of nuclear weapons policies, and nuclear test policies. We’re going in the opposite direction, and here on Earth I am not entirely sure that’s a bad thing.
>They may be one of the safest methods of space lift per hundred tons to LEO.
They are also a good way to annihilate an opponent without facing repercussions. Once you have nukes in orbit it is a simple matter to make a really devastating first strike, or to counter the second strike. Oh what’s that, you would never use your orbital nukes for that, you just want to move stuff? Riiiight.
I could see it happening if one great power becomes utterly dominant, but again we’re heading in the opposite direction.
I think one of the Larry Niven books had a quote something like “the power requirements for moving between planets are on the same order of magnitude as the power requirements for punching large holes in them”.
Along with your transhumanist fables post, this will be required reading for my fall Economics of Future Technology Course at Smith College.
Imagine a different kind of thought experiment. Suppose you told someone technically knowledgeable in 1969 that for the next 45 years, there would be tremendous advances in photolithography, making it possible to manufacture cheaply small devices containing electronic circuits of the same basic types already known in 1969, only several orders of magnitude more complex — and that every other field of technology would stagnate.
How would this person’s extrapolation from 1969 under this assumption differ from the actual history?
(I’m not claiming that this is an accurate summary of what happened, but to me this thought experiment does suggest that it might be a decent first-order approximation.)
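As a rough sense of scale for that thought experiment (a stylized doubling every two years, not a measured figure):

```python
# How much more complex do circuits get after 45 years of biennial doubling?
years, doubling_period = 45, 2
factor = 2 ** (years / doubling_period)
print(f"~{factor:.1e}x")   # ~5.9e6x, i.e. "several orders of magnitude"
```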
Depends on who that person is, and the tone of voice with which you say “several”.
“For better or worse, video games now exist.”
The video game industry exists. Video games were first invented in the 1950s and you had arcade games and consoles by the 1970s.
“World population has tripled, in large part due to new agricultural advantages.”
The Green Revolution was in the 1960s. I was under the impression the majority of the increase in food production since then was in the diffusion of that level of technology to places like China and India and not the invention of new items like GMOs (I’m not positive, as the USDA records high levels of continuous productivity growth for the US).
“We have a place where you can log into from anywhere in the world and get access to approximately all human knowledge, from the scores of every game in the 1956 Roller Hockey World Cup to 85 different side effects of an obsolete antipsychotic medication. It is all searchable instantaneously. Its main problem is that people try to add so much information to it that its (volunteer) staff are constantly busy deleting information that might be extraneous.”
Wikipedia is not a technological advance. It is an online encyclopedia using volunteers. Books and webpages are technological advances. Encyclopedias and wikis are not.
> Books and webpages are technological advances. Encyclopedias and wikis are not.
Why not?
This post is great, thank you so much!
When I think of the Great Stagnation I think of the FDA destroying drug development to the point where we have exploding obesity; the degradation of many sciences to the point where dieticians can’t solve obesity; the fact that math papers from 1960 seem far more readable and friendly despite, or maybe because of, not having access to LaTeX; I think of university systems dying and exploding student debt and real median income; I think of slums that weren’t still supposed to be there, and probably wouldn’t be there if things had improved for the poor at the same rate they did between 1930 and 1970.
I don’t think of flying cars or Mars, but the fact that I think of these other things does make me sympathetic to the picture.
Obesity is in part a sign that we’re solving the much more serious problem of hunger, to the point where hand-wringing articles about obesity in developing countries are a thing. It’s like how rising cancer rates are largely due to people not dying of cholera, dysentery, or smallpox before they live long enough to get cancer.
This seems US-centric. Certainly European drug agencies have gotten stricter, but they’re significantly laxer than the FDA, and there’s a lot of research going on on this side of the pond. The university system doing… whatever it’s doing in the US seems completely absent from countries where tuition is low and poor students get scholarships rather than loans.
However, median income is stagnating, and slums, which had disappeared from (European, don’t know about overseas) France in the 70s, have been returning since the 90s. But this seems to be the Mystery of What’s Happening to All the Wealth We Create (with Our Fantastic Tech), not a lack of tech.
The FDA and EMA leapfrog each other every decade.
If only it was actually a mystery.
The question is, is 1970-present unusual in failing to achieve a bunch of things it seems like it should have? Or were people in 1970 complaining about how they hadn’t fixed the business cycle or stopped having war or come up with anything like antibiotics for viruses (or whatever; this isn’t meant to be a particularly likely list, just to convey the idea of ’70 feeling like today)?
Here’s something like antibiotics for viruses. Also see a more up-to-date presentation from SENS6 on the topic.
Almost all of those are social problems.
1) Obesity. Yeah, that’s what happens when you give food designed for starvation conditions to bodies designed for starvation conditions in modern abundance.
2) Dieticians cannot solve obesity for as long as we insist on feeding people cake for bread, sugar sauces for spices, fried meat as a main course, and making fish and veggies more expensive than they should be. Also, portion sizes in many places are too damn large.
3) LaTeX has definitely made math papers searchable, both in the sense of being able to find them online and in the sense of being able to search for keywords. Mind, I do feel a current of obscurantism running through large portions of scientific academia when I realize the sheer size of the gap in explanatory quality between an average research paper and a well-written academic book or monograph. Surely the journal reviewers have an easier time understanding the latter than the former too, but then again, the journal reviewers may be rewarding obscurantism.
4) This is largely an issue of state-level funding for public universities. We once had a system designed to make university cost very little via government funding. Now we have a system designed to award lots of aid to the Very Special Snowflakes who manage to get in despite being lower-class.
5) Slums are an issue of racism and classism. I am not joking. If you don’t believe me, go ask someone who studied urban planning, especially if they live in a gentrified neighborhood. Back around the ’60s and ’70s, dense urban cores were thought to be relics of a bygone age, best left to the undesirable classes, and suburbs to be the wave of the future. Result: as planned, slums in the urban core and wealthy, comfortable suburbs for those who can stand driving everywhere. Not planned: an oil crisis and several major economic shifts that made the urban cores become the desirable location, leading to the humorous picture of the professional salariat bidding huge portions of their monthly income on what were literally slum apartments only one or two decades ago, while the suburbs suffer a cancer of poverty.
6) Stagnation of living standards for the poor? Well first, slight correction: for the poor and the working class as a whole. In fact, for most everyone in the First World who isn’t some kind of educated professional. There were many components of this problem, but the maximum-likelihood candidates for the single largest contributors are: unbalanced “free” trade agreements, and financialization. Note the scare-quotes: actual “free trade” agreements have been basically nothing like what competent economists advocate, instead functioning largely to take big colored markers and say, “These countries will do banking, and these countries will do farming, and these will do manufacturing!” The result, of course, is that the banking countries stayed rich but got “hollowed out” right up until the inevitable financial crisis hit, while the manufacturing countries built themselves up the only way anyone ever has: through high-value, physical-capital-intensive export industries, infrastructure upgrades to the domestic economy, and the deliberate creation of a domestic consumer class. Witness how China and India are now doing versus any part of the Anglosphere, or Germany and France (contrary to popular opinion, France is actually very productive) versus the rest of Europe.
India doesn’t actually do much manufacturing. At least compared to China. The US has a lot of manufacturing now, it just doesn’t have many manufacturing jobs.
Disagree about the first one. It isn’t known what specifically it is about the Western diet, and only it, that makes people too hungry to work or think when they go back to the old portion sizes.
About fifteen years ago the NAS commissioned a bunch of articles called Beyond Discovery that talk about the paths to a few recent technologies. Some highlights… actually, wait.
This is going to be a long comment, sorry. I should probably write more on this in my own space.
Look, sure, lasers and transistors and all these things were “invented” before 1970 or whatever. This matters basically not at all relative to the improvements since then. Our storytelling about science sets us up not just to place way too much importance on “discovery” and too little on “incremental science” and “diffusion” of technology but also to overestimate the difference between these things. It also fucks with our already-bad hindsight bias: whatever the 50-years-from-now obvious-in-retrospect huge transformational development is, it’s today going to look like the kind of random basic research that early lasers were. But we’ll still say it was “invented” around now (or in the 60s when someone first proposed or demoed an early version that would never have been practical without recent advances).
I’d guess that, yes, there’s less low-hanging fruit now. We basically know all of the physics of everyday life. We’re pretty sure there’s not going to be another quantum mechanics, and that means there’s not going to be another transistor or another laser. (At best, something will happen with high-Tc superconductors.) But I’d still argue that this doesn’t particularly matter given the huge landscape of what people keep calling “incremental” improvements, and that important developments are still happening. Also important is that government no longer funds as much of the basic research that tended to lead to the transformational changes people are looking for (and the private sector has poor incentives to do so itself). But my sense is that a large part of the apparent stagnation is just bad history of science, bad philosophy of science, and bad futurism, plus a large helping of generally not paying attention.
OK, now some highlights from Beyond Discovery (~2000):
• Childhood leukemia survival rates are way up since 1970 (and continuing to rise), and this is definitely not just because better screening catches marginal survivable cases. (Really, so much has been happening in medicine. It’s easy to skim over Scott’s paragraph on that above. Please don’t.)
• The GPS project was finished in 1993 (after ~20 years), and has seen continued improvements and more open access.
• GM seeds hit the market in 1996.
• Gene testing didn’t have a path to practicality until probably the late 80s.
• Fiber optics started being developed in the 60s, but nobody made anything even potentially practical for moderate distances until 1970 or commercially viable until ~1980; EDFAs didn’t show up until 1986 and the Pacific fiber wasn’t laid until 1996; all these technologies are still being improved, for example with photonic crystal fibers commercially available since ~2000. (Also, the steady improvements since 1970 in laser cost, efficiency, maximum power, stability, etc. are a huge deal, so I’m not sure why that keeps getting dismissed; it’s not “the same but better” but “now we can do all these new things that were out of reach before”; not to mention orthogonal developments like frequency combs and ultrashort pulses.)
• Tissue engineering is new and improving.
• PRK/LASEK wasn’t really feasible until the late 90s (oh, and it relies on post-1970 laser developments, even if you inexplicably don’t think “hey, we can use these on cataracts” counts as one).
• Sonar for oceanography is new and more important than you probably think (and it really doesn’t matter that sonar was invented by Behm in 1913 or da Vinci in 1490 or whatever you’re thinking).
• The whole ozone depletion thing was figured out and acted on post-1970.
• Wavelet transforms are post-1980 and everywhere now (and again, yes, people had figured it out earlier, in many separate domains; this is a great example of how “diffusion” of technology isn’t magic, it doesn’t come for free, the work of generalizing and communicating and creating more experts and reapplying is a big deal, and so on).
On childhood leukemia, check out the survivorship curves (specifically for acute lymphoblastic leukemia) over time. It’s one of my go-to illustrations for the power of research.
i think y’all are missing something fairly important here, in saying that ‘basically every field, excepting computer science, has stagnated’. this might be true, i have no idea. but you’re treating research as an endless resource, or at least one independent of activity in other fields: in real life, it has to come from somewhere. i’m not claiming that the government has moved all its resources from the material sciences over into cs (truth to be told, i can’t find any statistics on it), but there certainly has been a large-scale shift towards computing research when it comes to the development “budget” of humanity as a whole.
a compilation of various publication databases (table) shows that chemistry and physics abstracts are subject to a (rapidly fluctuating) growth rate of around 2-4% per year. this obviously doesn’t mean much on its own (people may just be less inhibited about publishing), but growth in computer science/electrical engineering publications is way up at 6%/year. for reference, that’s around the growth rate of the material sciences at their very peak, during 1907-1960.
sure, a nontrivial portion of the observed effect probably is due to exhaustion of the low-hanging fruit — but an awful lot of it is because we’ve found another tree.
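To put those growth rates in perspective, the implied doubling times (straight compound-growth arithmetic):

```python
import math

# Years for a publication count to double at each annual growth rate.
for rate in (0.02, 0.04, 0.06):
    t = math.log(2) / math.log(1 + rate)
    print(f"{rate:.0%}/yr -> doubles in ~{t:.0f} years")
# 2% -> ~35 yr, 4% -> ~18 yr, 6% -> ~12 yr
```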
One reason this “debate” can be frustrating is that nobody defines what they are actually talking about when they say “technology.” So people start throwing around individual metrics that sound impressive to them and then the other side says that’s not so impressive, because metric X, which they think is more impressive, has remained stagnant since the 1970s. Without even acting in bad faith everybody becomes the goalpostmovingest goalpostmovers who ever moved goalposts.
My opinion is that our economy used to have fewer built-in perverse incentives, and so things were more efficient. For example, a PhD used to mean that you contributed something meaningful to your field, it created value intrinsically, now it is just another round of qualification to be passed without necessarily creating value.
The difference between colonizing Antarctica and colonizing Mars is one of perspective – perhaps it’s like comparing a mild dose of marijuana to a full dose of LSD. To travel across tens of millions of miles of interplanetary space, to colonize another planet, to see the Earth as a pale blue dot in the sky every day – try to imagine the effect this would have on not just the colonists’ consciousness, but humanity’s in general. I don’t think this should be downplayed or dismissed as irrational – it is the great failing of this age that we have such a lack of this sort of vision or an appreciation for the “spiritual” power of such achievements.
The phenomenon I’m talking about has been called “the Overview Effect” – the change in consciousness that many astronauts have spoken of after orbiting the Earth or traveling to the moon. One can only imagine the effects of years spent in space and on another world.
There was a great deal of this sort of “impractical” talk in the wake of Apollo, but it seems to have gone out of fashion. And that may be more telling and tragic than the fact that we haven’t gone beyond low earth orbit since 1972. I believe mankind needs another mind-expanding adventure in the Cosmos soon to wake us from our visionless malaise, or we may never get the chance again.
There are smartphones with Tb of memory?
Just a sidenote:
I think space colonization will only start in earnest after we have whole brain emulation, everyone lives in virtual reality anyway and it doesn’t matter whether the servers are on Earth, Mars or wherever (except access to resources on one hand and communication latency with other servers on the other hand).
One reason is that being a canned primate is not so fun. Another is that bootstrapping a self-sufficient technologically advanced economy is very hard without something like nanotechnology (and by the point we have nanotechnology, WBE shouldn’t be hard).
I find our slowing rate of technological progress so striking, especially when it comes to space exploration, that I am starting to think it may be a solution to the Fermi Paradox.
Consider: Through demographic transition, populations inevitably reach a point where the fertility rate declines below replacement level. This has already happened in several areas of the globe. Countries like the U.S. with large immigration programs will be able to keep growing for many years, but eventually every region of the world will have gone through transition and the sources of new migrants will dry up.
Eventually the world’s population will level out. United Nations projections are that this will happen about 2150, at about 10-12 billion people. After that, based on current demographic trends, world population will start to decline.
The human species is currently very very very far away from the technology that might make interstellar exploration possible. Unfortunately, there is a clear link between population growth and technological growth. If nothing else, population puts a limit on the number of people who can dedicate themselves to science or research. It’s also clear that we are entering a period of diminishing returns: a century ago, a single gifted scientist like Einstein could make major breakthroughs on his own in a single year. Now, it takes teams of many scientists many years of research to progress the state of knowledge.
For these reasons, it seems logical to conclude that humanity will never reach the technology necessary to explore the stars. The gap between what is needed and where we are likely to be is just too huge.
Now, I don’t believe it is a coincidence that humans are going through demographic transition at this point, and so far away from the possibility of interstellar travel.
Humanity did not arise with the ability to engineer computers, modify crops and build massive dams. We evolved in a Malthusian environment with high birth and death rates. The ability to manipulate the environment took many generations of incremental cultural knowledge accumulation, until plant domestication, writing and other breakthroughs combined to allow population explosion and economic/technological growth. The bulk of our ability to control the environment is cultural, and hard-earned.
It’s notable that as our ability to manipulate the environment has grown, so has our need to provide longer and longer upbringing for children. Hunter-gatherer societies raise children typically until puberty or just after. In contrast, western developed societies have reached a situation where the period children need support and maintenance is just about three decades. This trend is likely to continue, with more and more resources to be invested in fewer and fewer children.
There are good reasons to believe that these trends may operate on alien planets with alien species. Naturally, like ours, an alien species will not have evolved with the ability to immediately control its environment. That’s not how evolution works. Instead, evolution will equip a species with intelligence and the ability for cultural accumulation of knowledge. That species will in turn take a long time to achieve technological dominance of its environment. It seems likely that, like us, a large amount of time and resources will have to be invested in the young, so that they are able to deal with an increasingly complex society and environment. After all, extended child rearing is the one thing which sets us apart from other animals. It must be characteristic of an intelligent species.
Investment in child rearing is certainly characteristic of intelligence on Earth. That cannot be an accident.
Therefore, an alien intelligent species is likely to suffer from the same pressure to lengthen child rearing and support that we are under.
Once they obtain the technology for birth control, and have dominated their environment enough to ensure food and material security, they too will be under pressure to go through demographic transition.
I conclude that demographic transition and its associated slowdown in technological growth are inevitable for all intelligent species anywhere in the galaxy…
So that’s why there are no aliens in the skies.
This assumes that every single intelligent species ever to exist managed to hit its “decadent phase” of slow-to-negative technological growth prior to developing artificial intelligence to run their technology for them. It’s also worth noting that far from long periods of child-rearing and low birth rates being a bad thing, it’s precisely the long-reared, low-birth-rate, low-physical-labor societies that are busy developing robotics and AI.
So basically, your Malthusian hordes may have a fire lit under their butts, but we’re the ones capable of building and deploying (SAFELY, and I cannot emphasize that enough) something smart and versatile enough to obsolete the very concept of structuring lifestyles around work.
If you believe AI is on the way or is achievable in the not-too-distant future, this may be a valid argument. Otherwise…
I don’t think that is a plausible reason for Fermi Paradox.
The reason for demographic transition is that human psychology is poorly suited for modern society. Namely, humans lack a basic desire to have as many children as possible, and proxies that were sufficiently reliable in the past work badly now. That cannot be true for all sentient species, for simple statistical reasons.
Also, I think demographic transition will not last forever. When most people are religious fundamentalists, global birth rate will rise again. And if technological civilization survives long enough, the human species itself will change, evolutionary pressure is just too strong.
“When most people are religious fundamentalists, global birth rate will rise again.”
Hmmm.
“The human species itself will change.”
Natural selection doesn’t happen at the species level, but at the individual level.
Don’t be a pedant, it’s obvious what he means. Selection affects individuals but populations are composed of individuals; putting a large pressure on most members of a population will in fact change it, the same way that painting every individual building on a street orange will change the color of the block. Besides if you really want to play that semantic game, selection happens on the level of genes not individuals anyway, as any number of examples of intergenomic conflict demonstrate.
Of course, the claim about religious fundamentalism is just as dubious as you imply. I don’t doubt that people will become more religious, but the specific kind of illiteracy which produces modern fundamentalism is historically rare; I’d predict a return to Traditional esoteric spirituality over that.
That’s not even as good an argument as pure nitpicking. Natural selection happens at the gene level, but the change it causes is at the species level. Individuals don’t evolve, they just fuck and die.
As I understand it, selection can happen on genes, individuals or even at some kind of group level.
But that’s not really the point.
Lalartu claimed that the “human species itself will change”, as “evolutionary pressure is just too strong.” Well, that’s not any kind of evolution that I know about, and it’s just hard for me to envisage how that would work. If by ‘evolutionary pressure’ Lalartu means ‘natural selection’, then this claim seems to me a misunderstanding of evolution.
Some people in first-world countries have many children now for no other reason than that they want to. Genes which lead to those desires have such a huge advantage that in just a few centuries most humans will have them, and Malthusians will be right again.
As for education cost, that is a purely social thing, unrelated to technical progress at all.
Lalartu, okay at least I know where you’re coming from now. Your theory hinges on there being a genetic component to the desire to have a large family. Until recently humans haven’t had the choice one way or the other, but it’s an interesting idea.
Immortality is the obvious solution to the demographic transition problem.
I agree that immortality would make things interesting. It’s not clear to me that it would stimulate economic or technological growth, or lessen the burden of child rearing.
Imagine how big a student loan would be if you had 500 years to pay it off.
Or, consider the size of a home loan if the previous generation had a thousand years to save for a deposit…
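The joke, quantified: compounding over centuries is brutal (an illustrative 3% rate, nothing more):

```python
# Growth factor of a debt left compounding for 500 years at 3% per year.
rate, years = 0.03, 500
print(f"{(1 + rate) ** years:.1e}x")   # ~2.6e6x the original principal
```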
That’s less of a problem if you consider that, unlike AmeriKKKa, in civilized nations the cost of attending college is zero or negative, and mortgage schemes have not been encouraged by insane technocrats as a form of debt slavery to stave off socialist sympathies in the face of the visible successes of the semi-planned economy.
The economically compelling use-case for lunar habitation is most likely going to be asteroid mining. Large, metal-rich asteroids are inevitably going to be mined for materials that are inaccessible on Earth, and when that happens it will make much more sense to process them in Lunar orbit than in Earth orbit. While 99.99% of the process will be automated, there will still be situations where it makes economic sense to have human technicians on-site to diagnose and repair problems without a round-trip transmission delay of roughly 2.6 seconds.
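A quick check on that delay figure, using the mean Earth–Moon distance (this is the physics floor; real links add processing latency on top):

```python
# Light-travel delay between Earth and the Moon at mean distance.
C_KM_S = 299_792.458        # speed of light in vacuum, km/s
EARTH_MOON_KM = 384_400     # mean Earth-Moon distance, km

one_way = EARTH_MOON_KM / C_KM_S
print(f"one-way delay:    {one_way:.2f} s")      # ~1.28 s
print(f"round-trip delay: {2 * one_way:.2f} s")  # ~2.56 s
```

For teleoperated repair work it’s the round trip that matters, which is why on-site technicians beat Earth-based operators for anything requiring tight feedback.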
Bringing usefully-large chunks of rock into Earth or Lunar orbit seems hazardous. If someone entrepreneurial can get it here from the asteroid belt, someone with a sense of humour can drop it onto Earth from orbit.
The result would be interesting, but I want to be elsewhere when it happens. Maybe Mars.
“There is no reason to colonize space before you have finished colonizing Antarctica. And there is no reason to colonize Antarctica until you have finished colonizing Nebraska.”
I’ve been saying this for years, but in a different way: Colonize Alaska! You can get a good-sized plot of land off the road system for $5,000. Some hardy jobless millennials ought to at least consider coming up here to live off the land.
Well, I gave a presentation on this at Transvision 2004, and have written more on it since.
Sure, lots of impressive stuff has happened in the 45 years from 1969 to 2014. But lots more impressive stuff happened in the 45 years from 1924 to 1969. And in the 45 years from 1879 to 1924. And in the 45 years from 1834 to 1879.
It isn’t even close. I have had this argument too many times. Any claim that we haven’t entered a dramatic technological slowdown since about 1970 is based in ignorance of history.
Would you distribute these writings?
“But why were the engines so useless? Were there some technological advances that were needed?”
AIUI the metallurgy of the time was nowhere near what you’d need for useful steam engines, especially the more efficient high-pressure ones. Plus metal was expensive. Also, industry uses rubber a lot for seals, and our primary rubber sources are New World plants. If you want to do without rubber you need really *good* machining. There’s some latex in some Old World plants, but clearly no one had noticed.
As for using wood instead of coal, the ancient world already had a deforestation problem just from turning trees into ships and firewood. Adding steam engines to the mix wouldn’t have helped. If you try to use biomass in general as fuel… well, with a low-efficiency engine you’re probably better off feeding the biomass to humans or animals and using their labor.
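The back-of-the-envelope here is stark. Taking rough, illustrative efficiencies (an early atmospheric engine at around 0.5% thermal efficiency, a draft animal converting feed to useful work at very roughly 10%; both figures are ballpark assumptions), the same biomass yields an order of magnitude more work through muscle:

```python
# Back-of-envelope: work extracted from the same biomass, engine vs. animal.
# Efficiency figures below are rough illustrative assumptions, not data:
# Newcomen-class engines ran around 0.5% thermal efficiency, while draft
# animals convert feed energy to useful work at very roughly 10%.
BIOMASS_MJ = 1000          # energy content of some lump of biomass, MJ
ENGINE_EFF = 0.005         # assumed early-steam-engine efficiency
ANIMAL_EFF = 0.10          # assumed feed-to-work efficiency of an animal

print(f"engine work: {BIOMASS_MJ * ENGINE_EFF:.0f} MJ")   # ~5 MJ
print(f"animal work: {BIOMASS_MJ * ANIMAL_EFF:.0f} MJ")   # ~100 MJ
```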
I think one reason for the appearance of slowed technological progress is that a lot of the stuff the last half-century has been good at – medicine/genomics, computing, telecommunications – is largely ‘immaterial’. It’s not bigger, louder, faster, stronger. This stuff has powerful aggregate effects, but these emerge through diffuse, decentralized use by billions, not through singular projects where specific people Decide To Make The Future.
I think a book which really made the point to me about what the informational revolution meant was James Gleick’s The Information: A History, a Theory, a Flood. A lot of the descriptions of the people doing crucial research into information theory (essential for basically all modern telecommunications, data storage, computing, etc.) involve them gradually feeling their way towards the actual, rigorous theories of information we take for granted, absent any sense that this was a grand and imperative human project. They were building an intellectual edifice people didn’t even know they might want. Information theory and communication statistics didn’t appear to have obvious material applications, nor the transcendent cachet of things like pure mathematics. As we can see, it’s had huge material *consequences*, but those were mostly unforeseen.
I’m a cartographer and a geographer – I make maps. I regularly handle data sets of millions of points and polygons, satellite imagery better than 1980s military-grade photos, and computational processes that simply could not have been done twenty years ago.

Mapping, 50 years ago, was mainly a material, paper-and-pen-and-acetate process. You knew what you could do, and what was impossible, or impractical. The horizons of information were sharply limited – in the end, you could only collect and analyze a tiny proportion of the data that existed. Maps were only made to be rough approximations, because what would you possibly use a more precise map for?

Now, it’s not so much a question of ‘can it be done’, but ‘is it worth bothering with, and will anybody use it?’ It’s not that I can do things the cartographers of the past only dreamed of – it’s that they probably didn’t even dream of them. It likely no more occurred to the average surveyor of the year 1950 to dream of an always-updating collaborative total planetary map than it would for a mechanic to design a pickup truck the size of an ocean liner. (“What for? Where would you drive it? What would you power it with?”) When the future’s progress consists of useful but prosaic things you didn’t know you needed, it’s hard to get enthusiastic. But it’s still progress, I think.