One of the best parts of writing a blog is being able to answer questions like this. Whenever I felt like I understood something new and important, I wrote a post about it. This makes it easy to track what I learned.
I think the single most important thing I discovered this decade (due to a random comment in the SSC subreddit!) was the predictive coding theory of the brain. I started groping towards it (without knowing what I was looking for) in Mysticism And Pattern-Matching, reported the exact moment when I found it in It’s Bayes All The Way Up, and finally got a decent understanding of it after reading Surfing Uncertainty. At the same time, thanks to some helpful tips from other rationalists, I discovered Behavior: The Control Of Perception, and with some help from Vaniver and a few other people was able to realize how these two overarching theories were basically the same. Discovering this area of research may be the best thing that happened to me in the second half of this decade (sorry, everyone I dated, you were pretty good too).
Psychedelics are clearly interesting, and everyone else had already covered all the interesting pro-psychedelic arguments, so I wrote about some of my misgivings in my 2016 Why Were Early Psychedelicists So Weird?. The next step was trying to fit in an understanding of HPPD, which started with near-total bafflement. Predictive processing proved helpful here too, and my biggest update of the decade on psychedelics came with Friston and Carhart-Harris’ Relaxed Beliefs Under Psychedelics And The Anarchic Brain, which I tried to process further here. This didn’t directly improve my understanding of HPPD specifically, but just by talking about it a lot I got a subtler picture where lots of people have odd visual artifacts and psychedelics can cause slightly more (very rarely, significantly more) visual artifacts. I started the decade thinking that “psychedelic insight” was probably fake, and ended it believing that it is probably real, but I still don’t feel like I have a good sense of the potential risks.
In mental health, the field I am supposed to be an expert on, I spent a long time throwing out all kinds of random ideas and seeing what stuck – Borsboom et al’s idea of Mental Disorders As Networks, The Synapse Hypothesis of depression, etc. Although I still think we can learn something from models like those, right now my best model is the one in Symptom, Condition, Cause, which kind of sidesteps some of those problems. Again, learning about predictive processing helped here, and by the end of the decade I was able to say actually useful things that explained some features of psychiatric conditions, like in Treat The Prodrome. Friston On Computational Mood might also be in this category; I’m still waiting for more evidence one way or the other.
I also spent a lot of time thinking about SSRIs in particular, especially Irving Kirsch (and others’) claim that they barely outperform placebo. I wrote up some preliminary results in SSRIs: Much More Than You Wanted To Know, but got increasingly concerned that this didn’t really address the crux of the issue, especially after Cipriani et al (covertly) confirmed Kirsch’s results (see Cipriani On Antidepressants). My thoughts evolved a little further with SSRIs: An Update and some of my Survey Results On SSRIs. But my most recent update actually hasn’t been written up yet – see the PANDA trial results for a preview of what will basically be “SSRIs work very well on some form of mental distress which is kind of, but not exactly, depression and anxiety”.
One place I just completely failed was in understanding the psychometrics of autism, schizophrenia, transgender, and how they all related to each other and to the normal spectrum of variation. I kind of started this program with Why Are Transgender People Immune To Optical Illusions? (still a good question!), fumbled around by first-sort-of-condemning and then sort-of-accepting the diametrical model of autism and schizophrenia, and then admitting I just didn’t know what was going on in this area and not talking about it much more. I still sometimes have thoughts like “Is borderline the opposite of autism?” or “Are schizoid people unusually charismatic, unusually uncharismatic, or somehow both?”, and I still have no idea how to even begin answering them. Autism And Intelligence: Much More Than You Wanted To Know at least helped address a very tangentially related question and is probably the closest thing to a high point this decade gave me here.
The Nurture Assumption shaped my 2000s views of genetics and development. Ten years later, I’m still trying to process it, and in particular to square the many behavioral genetics studies showing nonshared environment doesn’t matter with the many other studies suggesting it does (see eg The Dark Side Of Divorce and Shared Environment Proves Too Much). I think I started to get more of a handle on attachment theory and cPTSD as both being different aspects of the same basic predictive processing concept of “a global prior on the world being safe” – see Mental Mountains and Evolutionary Psychopathology for two different ways of approaching this concept. This made me conclude that I might have been wrong about preschool (though see also Preschool: Much More Than You Wanted To Know). Honestly I am still confused about this. The one really exciting major good update I made about genetics this decade was understanding and fully internalizing the omnigenic model.
One of the big motivating questions I keep coming back to again and again is – what the heck is “willpower”? I started the decade so confused about this that I voluntarily bought and read Baumeister and Tierney’s book Willpower and expected it to be helpful. I spent the first few years gradually internalizing the lesson (which I learned in the 2000s) that Humans Are Not Automatically Strategic (see also The Blue-Minimizing Robot as a memorial to the exact second I figured this out), and that hyperbolic discounting is a thing. Since then, progress has been disappointing – the only two insights I can be even a little happy about are understanding perceptual control theory and Stephen Guyenet’s detailed account of how motivation works in lampreys. If I ever become a lamprey I am finally going to be totally content with how well I understand my motivational structure, and it’s going to feel great.
Speaking of Guyenet, if nothing else this last decade has taught us that Gary Taubes did not solve all of nutrition in 2004, that Atkins/paleo/keto are good for some people and bad for others, and that diet is still hard. See the various Guyenet vs. Taubes and Taubes vs. Guyenet posts, and my 2015 The Physics Diet on where I was at that point. So what is going on with diet? Compressing an entire decade’s worth of research into two words, the key phrase seems to be “set point” (which, credit to Taubes, he was one of the first people to popularize). See eg Anorexia And Metabolic Set Point and Del Giudice On The Self-Starvation Cycle. But what is the set point and how does it get dysregulated? See my book review of The Hungry Brain for the best answer to that I have now (not so good). This whole mess helped me get a better understanding of Contrarians, Crackpots, and Consensus, and eventually ended up with me Learning To Love Scientific Consensus.
In terms of x-risk: I started out this decade concerned about The Great Filter. After thinking about it more, I advised readers Don’t Fear The Filter. I think that advice was later proven right in Sandberg, Drexler, and Ord’s paper on the Fermi Paradox, to the point where now people protest to me that nobody ever really believed it was a problem. AI has been the opposite – I feel like the decade began with people pooh-poohing it, my AI Researchers On AI Risk was part of a large-scale effort to turn the tide, and now it’s more widely accepted as an important concern. At the same time, the triumphs of deep learning have made things look a little different – see How Does Recent AI Progress Affect The Bostromian Paradigm? and Reframing Superintelligence – and I’ll be reviewing Human Compatible soon. I also got some really great insights on what “human-level intelligence” means from the good people at AI Impacts, which I wrote up as first Where The Falling Einstein Meets The Rising Mouse and later Neurons And Intelligence: A Bird-Brained Perspective (see also Cortical Neuron Number Matches Intuitive Perceptions Of Moral Value Across Animals and all the retractions and meta-retractions thereof). Overall I think I’ve updated a little (though not completely) towards non-singleton scenarios and not-super-fast takeoffs, which combined with the increased amount of effort being put into this area is cause for a little more optimism than I had in 2010. I know some smart people disagree with me on this.
In the 2000s, people debated Kurzweil’s thesis that scientific progress was speeding up superexponentially. By the mid-2010s, the debate shifted to whether progress was actually slowing down. In Promising The Moon, I wrote about my skepticism that technological progress is declining. A group of people including Patrick Collison and Tyler Cowen have since worked to strengthen the case that it is; in 2018 I wrote Is Science Slowing Down?, and late last year I conceded the point. Paul Christiano helped me synthesize the Kurzweilian and anti-Kurzweilian perspectives into 1960: The Year The Singularity Was Cancelled.
In 2017, I synthesized some thoughts that had been bouncing around about rising prices into Considerations On Cost Disease, still one of this blog’s most popular posts. I felt like early responses were pretty weak, although they brought up a few interesting points on veterinary medicine, cosmetic medicine, and other outliers that I still need to transform into a blog post; Alon Levy’s work on infrastructure in particular has also been great. The first would-be-general-answer that made me sit up and take notice was Alex Tabarrok’s book (link goes to my review) The Prices Are Too Damn High – but I explain there why I don’t think it can be the full answer. The most recent thing I learned (tragically underhighlighted in my wage stagnation post) is that a lot of apparent wage stagnation is due to cost disease – consumer services ballooning in cost means the consumer inflation index rises faster than the business inflation index, productivity gets measured by business inflation, wages get measured by consumer inflation, and so it looks like productivity is outpacing wages. This is still only half of the apparent decoupling, but it’s still a big deal.
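The deflator wedge described above can be made concrete with a toy calculation. All the numbers below are made up for illustration; the point is only the mechanism: if nominal wages track nominal output exactly, but wages are deflated by a faster-rising consumer index while productivity is deflated by a slower-rising business index, an apparent gap opens up.

```python
# Hedged numerical sketch of the deflator wedge. Every rate here is an
# assumed round number, not real data.

nominal_output_growth = 0.04   # nominal output per worker, per year (assumed)
nominal_wage_growth   = 0.04   # wages track output one-for-one (assumed)
business_inflation    = 0.010  # output/business deflator (assumed)
consumer_inflation    = 0.025  # consumer index, bloated by cost-diseased services (assumed)

# Productivity is measured with the business deflator, wages with the consumer one.
real_productivity_growth = nominal_output_growth - business_inflation
real_wage_growth         = nominal_wage_growth - consumer_inflation

# Even though nominal wages never lagged nominal output, the measured
# series diverge by the inflation-index gap.
gap = real_productivity_growth - real_wage_growth
print(f"apparent wage-productivity decoupling: {gap:.3f} per year")
```

With these toy numbers, measured productivity grows 3.0%/year while measured real wages grow 1.5%/year, purely because of which deflator each series uses.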
The highlight/lowlight of the decade in social science was surely the replication crisis. My first inkling that something like this might exist was in December 2009, from the Less Wrong post Parapsychology: The Control Group For Science. There were a couple of years where people were trying to figure out how bad the damage was; of these, my 90% Of All Claims About Problems With Medical Studies Are Wrong was more optimistic, and my slightly later The Control Group Is Out Of Control was more pessimistic (I still stand by both). As the decade continued, I think we got better about realizing that many to most older studies were wrong, in a way that didn’t make us feel like total Cartesian skeptics or like we were going to have to throw out evolution or aspirin or any of the things on really sound footing. After that it just became fun: my “acceptance” stage of grief produced some gems like 5-HTTLPR: A Pointed Review.
On SSC, I particularly examined some of the replication issues of growth mindset. I started in 2015 by pointing out that the studies seemed literally unbelievable, but so far nobody had tried attacking them. I claim to have been way ahead of the curve on this one – if you don’t believe me, just read the kind of pushback I got. But by 2017, that situation had changed – Buzzfeed posted an article that called the field into question, but still without clear negative evidence. Finally, over the past few years, the negative studies have come pouring in, accented by supposedly “positive” studies by Dweck & co showing effect sizes only a tiny fraction of what they had originally claimed. The latest research (can’t find it right now) is that praising students for effort rather than for ability has no effect on how hard-working or successful they are, debunking the original headline result that got most people interested in the field and nicely closing the circle.
In 2010 I worked with a medical school professor who studied the placebo effect and realized I didn’t understand it at all. Over the past few years I gradually became more convinced of the heterodox position of Gøtzsche and Hróbjartsson, who believe placebo effect doesn’t apply to anything except pain and a few other purely mental phenomena (The Placebo Singers, Powerless Placebos). I’ve since become less convinced that’s true (just today I treated a patient who I’m pretty sure has psychosomatic vomiting from what he falsely believes was a medication side effect, and if belief can cause vomiting, surely it can also treat it). As with so many other things, it was predictive processing to the rescue – see section IV part 7 of my Surfing Uncertainty review. I now think I have a pretty good understanding of how placebos can treat both purely mental conditions and conditions heavily regulated by the nervous system, while still mostly sticking to Gøtzsche and Hróbjartsson’s findings.
I started this decade confused about how to understand ethics given all the paradoxes of utilitarianism. I’m still 90% as confused now as I was then, but I still feel like I’ve made some progress. A lot of my early thinking involved folk decision theory and contractualism – how would you act if you expected everyone else to act the same way? I explored the edges of this idea in You Kant Dismiss Universalizability and Invisible Nation. I’m not sure how much it helped my search for metaethical grounding, but it helped me get a more robust understanding of liberalism and clarify my views on some practical questions, eg Be Nice, At Least Until You Can Coordinate Meanness and The Dark Rule Utilitarian Argument For Science Piracy. In general I think this has given me a more cautious theory of decision-making that’s occasionally (and terrifyingly) set me against other more anti-Outside-View rationalists. I think the most important shift in my understanding of ethics this decade was the one I wrote up in Axiology, Morality, Law (formerly titled “Contra Askell On Moral Offsets”), which isn’t related to grounding utilitarianism at all but sure helps make the problem less urgent.
Despite my better judgment, I waded into politics a lot this decade. I Can Tolerate Anything Except The Outgroup produced this blog’s first “big break”, but it admitted it didn’t really understand the factors underlying “tribe”. Since then Albion’s Seed helped provide another piece of the puzzle, and a better understanding of class provided another. I went a little further discussing why tribes have ideologies associated with them in The Ideology Is Not The Movement, how that is like/unlike religion in Is Everything A Religion?, and hammered it home unsubtle-ly in Gay Rites Are Civil Rites.
I wrote the Non-Libertarian FAQ sometime around 2012 and last updated it in 2017. Sometime, possibly between those dates, I read David Friedman’s A Positive Account Of Property Rights, definitely among the most important essays I’ve ever read, and got gold-pilled (is that a term? It should be a term). I’ve since been trying to sort this out with things like A Left-Libertarian Manifesto, and trying to move them up a level as Archipelago. James Scott’s Seeing Like A State and David Friedman’s Legal Systems Very Different From Ours were also big influences here. Like all platitudes, “government is a hallucination in the mind of the governed” is easy to understand on a shallow level but fiendishly complicated on a deep level, but I feel like all of these sources have given me a deep understanding of exactly how it’s true.
The rightists (especially Moldbug) get the other half of the credit for helping me understand Archipelago, and also deserve kudos for teaching me about cultural evolution. My first attempts to engage with this topic were nervous and halting – see eg The Argument From Cultural Evolution. I got a much better feel for this after reading The Secret Of Our Success, and was able to bring this train of thought back to its right-wing roots in Addendum To Enormous Nutshell: Competing Selectors. I’m grateful to the many rightists who argued about some of these points with me until they finally stuck.
I had more trouble engaging with leftists. I started with Does Class Warfare Have A Free-Rider Problem, and it took me way too long to figure out that this was one of the major questions sociology was asking, and that “an answer” would look less like “your game theory analogy is missing this one variable” and more like a whole library full of books on what the heck society was. Later the same engagement produced Conflict Vs. Mistake, which I am informed is still unfair and partially inaccurate, but which (take my word for it) is a heck of a lot better than the stuff I was thinking before I wrote it. More recently I’ve been trying to figure out a sympathetic account of activism (as opposed to the unsympathetic account that it’s virtue signaling and/or people who are really bad at figuring out what things are vs. aren’t effective). You can see the outline sketched in Respectability Cascades and Social Censorship: The First Offender Model, and I’ll sketch the whole thing out sometime when I have enough emotional energy to deal with the kind of people who will have opinions on it.
I also had to grapple with the sudden rise of social justice ideology. I’m proud of my work on gender differences – both what I learned, how I wrote it up, and the few bits of original research I did (eg Sexual Harassment Levels By Field). My knowledge and claims started off kind of weak (Gender Differences Are Mostly Not Due To Offensive Attitudes), but I eventually feel like I got a really great evidence-based basically-airtight theory of what is going on with gender imbalances in different fields, which I posted most of in Contra Grant On Exaggerated Differences (I’m still thankful for the commenter who solved that one remaining paradox about math majors). And despite all the mobs and vitriol I think sound science has basically triumphed here – I was delighted to see as mainstream a blog as Marginal Revolution recently publish, without any caveats or double-talk, a post called Sex Differences In Personality Are Large And Important and get basically no pushback. I was a lot more pessimistic around 2017 or so and described some thoughts on how to make a strategic retreat in Kolmogorov Complicity And The Parable Of Lightning, which I still think is relevant in some areas. But I actually start the new decade really optimistic – I haven’t written up an explanation of why, but careful readers of New Atheism: The Godlessness That Failed may be able to figure it out, especially if they apply some of the same metrics I used there to track how social justice terms have been doing recently.
Upstream of politics, I think I got a better understanding of…game theory? Complex system dynamics? The most important post here was Meditations On Moloch; the sequel/expansion, whose thesis I have yet to write up in clear prose, is The Goddess Of Everything Else. Reading Inadequate Equilibria was also helpful here.
My understanding of “enlightenment” went from total mystical confusion to feeling like I have a pretty good idea what claims are being made, and mostly believing them. This line of thinking started with the Mastering The Core Teachings Of The Buddha review, and then was genuinely helped by Vinay Gupta’s contributions summed up in Gupta On Enlightenment, despite the disaster in the comments of that post. From there I progressed to reading The Mind Illuminated, and Is Enlightenment Compatible With Sex Scandals led me to discover The PNSE Paper, which as much as anything else helped ground my thinking here (the comments there were pretty good too).
And thanks to all of you who took the survey, I went from skepticism of birth order effects to saying Fight Me, Psychologists: Birth Order Effects Exist And Are Very Strong. This was bolstered by Eli Tyre and Bucky’s posts on Less Wrong about birth order in mathematicians and physicists respectively. Last year I expanded on that with a post on how birth order responded to age gaps (somewhat updated and modified here, thanks Bucky). Once this year’s survey results are in I expect to have a lot more data on exactly what causes birth order effects and maybe how to deal with them. If you haven’t taken the SSC survey this year, consider this your reminder to do it here.
Not many of these were total 180 degree flips in my position (though birth order, preschool, psychedelic insight, and the rate of scientific progress are close). And not many of them completely resolved a big question that had been bothering me before (though the Fermi Paradox paper, omnigenic model, and animal neuron work did). A few of them confirmed things I had only suspected before (growth mindset, gender imbalances, diet). Many of them feel like what MIRI calls “deconfusion”, turning a space full of unknown unknowns to one where you feel like you have a decent map of where the major problems are and what it would feel like to solve them. The enlightenment research seems to fit here – I went from “I have no idea how to even think about this question or whether it’s all fake” to “I don’t know exactly what’s going on here, but I know what needs to be explained, and it looks like the explanation will have a shape that fits nicely into the rest of my ontology.”
There’s an argument that I should learn less each decade, since I’ll be picking higher and higher fruit. My own knowledge can advance either because civilization advances and I hear about it, or because I absorb/integrate older knowledge that I hadn’t noticed before. Civilization advances at a decade per decade (or maybe less; see the Cowen & Southwood paper above), but each year it becomes harder and harder to find relevant older knowledge that I haven’t integrated yet. I plausibly only have five more decades to live, and I don’t think I’d be happy only advancing five times this amount over the rest of my life, let alone less than that.
But I notice I only started SSC about halfway through the decade, and that my progress picked up a lot after that. I don’t think it’s just recall bias from being able to track myself better. I think being able to put ideas out there and have you guys comment on them and link me to important resources I might have missed has been great for me. I only started taking full advantage of that around 2015; this decade I have a head start. And maybe I’ll discover other useful tools that will speed things up further.
Thanks for sticking around with this blog, and have a happy third decade of the twenty-first century.
Great summary! In my opinion, it is a good introductory post to show someone, when they ask “What’s this blog about?”.
I made a similar journey this decade regarding AI and the Fermi Paradox. My opinions on AI changed from classic naive techno-optimism to a standard view of “We better get this right the first time, or else”. With the Fermi paradox, however, I still feel there’s a major question left unanswered, but now it’s more clearly defined. Isaac Arthur helped me here, with his excellent video series diving deep into each of the proposed solutions, classifying them and comparing them. When I started, I had the same thought everyone had: “Well, the universe is big, and the speed of light is slow, so…”. I did some simple calculations on my own, and putting things in terms of precise numbers – the sizes of galaxies, the age of the universe, when and how fast life appeared on Earth – helped to clarify my thinking greatly.
Nowadays, I’m very confident in saying there are no type-3 civilizations in our galactic supercluster (we don’t see any megastructures, and we haven’t been colonized already). However, I’m still extremely curious to know why the whole universe isn’t teeming with simple life, and if it is, why no one has started colonizing the galaxy already. My probability mass these days rests somewhere near “simple life is common in the universe; multicellular eukaryotic life, or its analogues, takes a very long time to evolve, and we were the first”, but I’m not wholly satisfied with this answer, and there are still major discoveries in astrobiology to be made.
Have you read Galaxy Simulations Offer a New Solution to the Fermi Paradox?
The TL;DR of the article is that interstellar travel is hard and therefore dependent on comparatively short trips between star systems, and the motions of star systems bring them closer together or farther apart over long time periods, preventing or promoting actual travel by both machines and potentially by life forms.
But the fact that no interstellar visitors are here now—what Hart called “Fact A”—does not mean they do not exist, the authors say. While some civilizations might expand and become interstellar, not all of them last forever. On top of that, not every star is a choice destination, and not every planet is habitable. There’s also what Frank calls “the Aurora effect,” after Kim Stanley Robinson’s novel Aurora, in which settlers arrive at a habitable planet on which they nonetheless cannot survive.
I’m not completely sold on this, but I have always felt that the difficulty of interstellar travel has been underestimated by most “solutions.” A lot of wishful hand-waving around replicating probes on the machine side and cryogenics on the meat side, for example. It may be key that there are few if any actually Earthlike planets near us (10 ly or so) at this point in the life of the galaxy.
Yes, I’m familiar with the original paper here. I don’t find it plausible that habitable planets are an essential component of a colonization wave. What’s preventing aliens from using planets as a resource to build space habitats? In fact, I would expect an interstellar civilization to not care about planets much at the beginning of colonization, instead preferring to stay on the outer edge of a gravity well, harvesting asteroids and building solar panels. Once a civilization completes a Dyson swarm around its home star, there is nothing keeping it from expanding to the closest neighboring stars and transforming them too, and that expansion would be easily visible from outside at the current level of technology, from as far away as the edges of the Local Supercluster of galaxies.
Basically, we see that space is full of resources that no one is picking up.
My personal theory is that interstellar colonization has no economic payoff, because you can’t economically ship anything interstellar, and thus civilizations sane enough to endure just do not do it. Civilizations that value “MAKE MORE SPAWN!” as a terminal end highly enough to ignore the extreme costs and zero return… well, death.
The return is the more spawn. Your model of “economic payoff” seems to be “production of consumer goods for a sedentary population”, which I think is too narrow to be applied to all possible civilizations.
And moot, because the civilizations which do follow that model of economics and can’t make interstellar colonization work, will instead build Dyson shells full of Economic Payoff, which will be almost as visible as interstellar colonizers and yet we don’t see them. Yes, we’ve looked.
It seems like an interesting and plausible line of thought that societies/species that are strongly inclined toward expansion to the stars would also be strongly inclined toward self-destruction in wars for resources. I’m not sure where we would look to test this idea, though.
“but I have always felt that the difficulty of interstellar travel has been underestimated by most “solutions.””
Definitely. If we’re ever going to see a warp drive, the evidence for it sure is hiding well. But another point I never see anyone make is that there is no empirical precedent for matter moving coherently through space at more than, say, 1,000 km per second. And any interstellar travel that actually exploits time dilation to a significant degree also blueshifts the crap out of any radiation or matter in your path; any structure would just evaporate. Note that what we call a ‘fast neutron’ has a relative velocity of about 5% of the speed of light, and most transmutation of elements happens at far lower speeds. Note also that Andromeda could have been teeming with intelligent life at pretty much any point, but if they had launched themselves towards us at 10% of the speed of light, we still would not be anywhere close to hearing about it. And if they did arrive, what would arrive would be a weird transmuted bullet of baryons.
So we can restrict ourselves to the Milky Way for all intents and purposes, which, even restricting yourself to more realistic speeds, is pretty massive too. It takes about 10 million years to get to the center of it at some of the highest empirically observed velocities for coherent structures of matter (note: stars are much less fussy about interstellar bombardment than spacecraft, being big balls of evaporating, transmuting elements in the first place, so this is being very generous).
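That 10-million-year figure can be sanity-checked with round numbers (all assumed: ~26,000 light years to the galactic center, ~1,000 km/s as roughly the highest speed observed for coherent objects such as hypervelocity stars):

```python
# Back-of-the-envelope check of the travel-time claim above.
# All inputs are assumed round numbers.

LIGHT_YEAR_M = 9.46e15        # metres per light year
SECONDS_PER_YEAR = 3.15e7

dist_to_center_ly = 26_000    # Sun to galactic center, roughly (assumed)
speed_m_s = 1.0e6             # 1000 km/s, in m/s (assumed)

travel_time_s = dist_to_center_ly * LIGHT_YEAR_M / speed_m_s
travel_time_yr = travel_time_s / SECONDS_PER_YEAR

print(f"travel time: {travel_time_yr:.1e} years")  # order 10 million years
```

The result comes out around 8 million years, consistent with the "about 10 million years" figure in the comment.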
So yeah. Interstellar travel is much harder than it is often made out to be. Other galaxies will never be more than pretty pictures to us, and the number of potential planets that could have physically reached us in a way that does not defy established physics may still be big, but it is decidedly finite.
To think we are simply in the earlier cohort in our part of the galaxy isn’t that mind-bending a probability, I’d say.
You seem to be suggesting that any interstellar probe or vehicle would have its structure transmuted by, what exactly? Free neutrons have a half-life of about ten minutes, and so aren’t found in the interstellar medium. Protons, and heavy nuclei including protons and neutrons, are much less effective at transmutation as their own positive charge prevents them from coming strong-force close to other nuclei. They can break up molecules and ionize atoms, but A: the density of the interstellar medium isn’t high enough for this to be a showstopper, and B: being charged, they can be electrostatically or electromagnetically deflected away from anything important.
There are clearly difficulties to interstellar travel, but baryonic transmutation is not one of them. Or possibly I am misunderstanding you.
My primary point is that there is no empirical precedent for coherent travel of matter at more than 1000 km/s. That may well be for good reasons; it seems like something to consider. We have not really been in the interstellar medium, so some skepticism as to what it is like is warranted; plenty of surprises were found just poking out of our provincial solar system.
Note that at 1000 km/s, and one 0.5 μm dust particle per 10⁶ m³, you’d be hitting one such dust particle per second, per m² of frontal area. I’m not sure how well your hypothetical electrostatic deflector would work on hydrogen in practice, but I’m pretty sure it’d do shit against a 0.5 μm particle, which has enough energy to straight up vaporize a few OOM its own volume in solid matter, no matter if diamond or unobtainium. Oh, and you’ve got to keep that up for thousands of years straight, assuming the very closest stars have anything of interest going on with them. And you’d better hope 0.5 μm particles aren’t more common than our models presume.
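The arithmetic in that comment can be reproduced with assumed round numbers (grain density ~2000 kg/m³ and a vaporization energy density of order 3×10¹⁰ J/m³ are my assumptions, not the commenter’s):

```python
import math

# Hedged sketch of the dust-impact arithmetic above. All inputs are
# assumed round numbers.

v = 1.0e6            # craft speed: 1000 km/s, in m/s
n = 1.0e-6           # one grain per 10^6 m^3, as number density per m^3
radius = 0.25e-6     # 0.5 micron diameter grain
rho_grain = 2000.0   # grain density, kg/m^3 (assumed)
e_vap = 3.0e10       # energy to vaporize solid matter, J/m^3 (assumed)

# Impacts per second per m^2 of frontal area: flux = n * v.
hit_rate = n * v

# Kinetic energy of one grain, and how much solid volume it could vaporize.
grain_volume = (4 / 3) * math.pi * radius**3
grain_mass = rho_grain * grain_volume
impact_energy = 0.5 * grain_mass * v**2
vaporized_volume = impact_energy / e_vap
ooms = math.log10(vaporized_volume / grain_volume)

print(f"{hit_rate:.1f} hits per second per m^2")   # one hit per second
print(f"vaporizes ~{ooms:.1f} OOM of its own volume")
```

This reproduces both claims: exactly one impact per second per m², and a vaporized volume a few orders of magnitude larger than the grain itself (the volume ratio is just ½ρv²/e_vap, independent of grain size).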
What, not including anything from your old blog? 😛 That’s like three whole years you’re missing! 🙂 There’s good stuff in there, too, some of it on these same themes…
My opinion on thing you’re still most frustratingly wrong about: Still taking a left-right political spectrum seriously. 🙂 (I’d say, using axes at all is a bad sign…)
One way of thinking about it is that if you did principal component analysis on the dataset of everyone’s political opinions, you’d get a list of directions in opinion space ranked in order of how much of the variance they explain. We call the first one (the direction along which the data varies the most) “the left right spectrum”.
Another way of thinking about it is that because of the median voter theorem, or forces similar to that, you generally have two massive coalitions vying for power. We call them “left” and “right”.
Neither characterisation says which direction/coalition is left and which is right, but we impose continuity back to the French revolution and bob’s your uncle.
You can add more directions/factions and get more accurate each time, but that is not inconsistent with the first direction/two major coalitions being the most important ones.
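The PCA framing above can be sketched on synthetic data. Everything here is made up for illustration: a fake "opinion space" where one latent axis drives most answers, so the first principal component recovers it and explains most of the variance:

```python
# Toy illustration of the PCA framing: synthetic "opinion space"
# where one latent axis (call it left-right) drives most answers.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_issues = 1000, 20

# One dominant latent factor, plus independent noise per issue.
left_right = rng.normal(size=(n_people, 1))
loadings = rng.normal(size=(1, n_issues))
opinions = left_right @ loadings + 0.3 * rng.normal(size=(n_people, n_issues))

# PCA via SVD of the centered data matrix; singular values come
# back sorted, so components are ranked by variance explained.
centered = opinions - opinions.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

# The first component ("the left-right spectrum") dominates.
print(explained[0])
```

The point of the sketch is just that "the left-right spectrum" falls out as the top-ranked direction; adding more components improves the fit without making the first one less important.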
Also add that people’s real life actions and way of living is often completely and totally inconsistent with (and even perhaps the total opposite of) their strongly held political beliefs.
For example – I had a friend who is a rabid Neil Bortz type libertarian who was taking advantage of long-term unemployment payments as well as accepting the full package of in-kind government benefits such as rent/utilities assistance, medical benefits, and the like, and whose dream job was a secure government job… I teased him about his loud incessant railing being totally inconsistent with how he lived – and predictably he got quite angry with me for pointing this out, because he saw absolutely no contradiction…
And likewise most of my “Liberal” friends live far more conservative lives (both socially and fiscally) than most of my “conservative” friends… So go figure… People are weird.
There’s no contradiction between using programs that exist while saying they shouldn’t exist… I’d LOVE to have the money I dumped into Social Security (which was all spent on donating uranium to North Korea and bailing out upper-class twits from banks to the Saudi Monarchy).
But unfortunately my not taking the checks when I retire won’t make any programs disappear.
Social stuff is the same… I write pro-drug-legalization articles, but I’ve never used any illegal drugs. (I WOULD use illegal lentiviral vectors with inducible OSK Yamanaka factors, if I could get them 😉
The issue arose for me when I got an NSF fellowship to graduate school. I raised the ethical question with my father, and he told me he would be happy to credit me with a suitable share of the taxes he had paid.
My favorite statement on the subject is by Manny in The Moon is a Harsh Mistress:
“Do business with the Authority. Do business with the law of gravity, too.”
It was literally cognitive dissonance.
My buddy literally could not connect the dots between the help he received and the program that administered that same aid which he railed against because that would challenge his position that it was 100% evil…
The trouble is that this creates blind spots for us. Manipulative and ambitious people work diligently to cultivate this sort of behavior in us because it gives them opportunities for wealth, power, and glory at our expense….
For example – many people selling gold beat the “Panic!! The sky is falling, they are out to get us” drum during the recovery from The Great Recession because the price of gold was declining… They were trying to unload it on anybody they could scare into buying it – and it worked… A lot of paranoid people paid a lot of money to take their investments out of stocks (which were roaring up) and put it into gold (which was declining).
This seems like the dual of folks who say, “You want higher taxes? You know that you can just write a check to the Treasury, right?” There are precious few individuals who will disadvantage themselves personally in order to make a token statement that they take their political beliefs (that aren’t law) really extra seriously.
Generally, a belief has to be more like an individual moral belief than a generalized political belief about optimal systems in order to rise to the level of persuading a significant number of folks. An example might be something like abortion. There are probably folks out there who are politically/morally against abortion, yet decide to have one… but there is probably a higher percentage that decide to go ahead and carry the baby to term (in situations where they think it would be disadvantageous to them personally) because of their political/moral beliefs than people who pay additional taxes or refuse to take otherwise socially-acceptable subsidies.
Indeed. Ayn Rand took government assistance late in life.
I always figured the universe of possible positions is R^n, where n is the number of total issues, and then as doubleunplussed says you can do principal component analysis.
Political compasses use the second component for up-down, at least theoretically (though it may be just a component they want to talk about). The globalist-nationalist axis has become a lot more important.
More than two axes and you can’t make a nice picture anymore. But I wonder if anyone’s actually tried to do PCA on the political spectrum.
I think there’s an SSC post on political PCA somewhere in the archives.
Probably more like R^(n²) once you figure in all the people who virtue signal, play possum, rent-seek, use the cause as a cloak for some other objective, or join a cause as a way to take advantage of what they perceive as gullible people inside it or weakness in its leadership…
😉 😉
Just bite the bullet and use R^infinity (i.e. l^2).
Well, the old blog’s been memory-holed for some reason, which would make it a bit awkward to link to :-/
Great, now I want to reread everything …
What is your current thinking on implicit association tests? In my memory, you wrote a piece making fun of people who were skeptical that they had any value. Later pieces seemed to just go along with the assumption that IATs are in fact worthless, with no comment on how things might have changed since the earlier spirited defense.
I can’t find that earlier piece now; my method was to do a Google search of your domain for “implicit association” and “thingspace”, since I remember that piece talking about how an IAT was a device for measuring the distance between two concepts in thingspace. But perhaps someone else can find it.
This is the post you’re thinking of, written in 2009 as part of a series of posts:
“I think the IAT is about much more than who is or isn’t racist. The IAT is a tool for measuring distances in thingspace.”
Here is confirmation in 2013 that he had a positive view of IAT: “Still, I’m glad people are finally realizing what I’ve been saying for years, which is that the IAT is really powerful and needs to be used for something other than nebulous social justice projects. I bet if the CIA created a (self/other + patriotic American/Russian double agent) Implicit Association Test it would totally work.”
Here is where his opinion might have started to change, in 2016:
“Implicit association tests probably don’t work (1, 2, 3, 4). That is, people who have “implicit racial biases” according to the tests are not more racist in everyday life than people who don’t. If this were true – and if it reflected a general failure of implicit racial biases to affect explicit actions – it’s hard to overestimate how much it would change psychology.”
And this seems like a pretty well fleshed out theory of what IATs are actually measuring:
I continue to endorse this. I think Implicit Association Tests are a really cool idea that could eventually be used for good, and it’s just unfortunate that in this world, they’re associated with “let me prove to you that all white people are racists” in a way that doesn’t make sense and isn’t supported by the studies.
My guess is that there are some ways that this effect sort of looks like or could be classified as racism, but it’s at deep emotional parts of the brain pretty far away from anything that controls action or even explicit preferences.
I think they were also linked to some businesses/consultancies whose connection to any scientific value of IATs is extremely sketchy. IATs are perfect rational astrology–they would fit an institutional need if they worked, so people want to use them and may not pay close attention to whether or not they work.
In my mid-30s, I began to question more of my assumptions. I don’t know if this is typical but, for me, in a military career, this made me an outlier (even in Military Intelligence). Twenty-five years later, I’m still doing it (much to the consternation of those in my current post-military career) but with sharper arguments and a better ability to understand those who disagree with me. SSC and other sources have helped me improve my analytical thinking through Scott’s reasoned arguments and his ability to integrate new information into those same arguments. Reading the comments is often excruciating, but this is now the only comments section on the internet I can find worth reading.
Additionally, having a close relative go through the demeaning process of misdiagnosis (repeatedly) and the over-prescribing of various SSRIs have given me yet another reason to look deeply into my own and society’s assumptions about mental illness and personality disorders. This blog is a treasure in that regard alone.
I’ve no claim to the sort of in-depth examination and increased understanding Scott has achieved over the past decade but that’s not the point for me – the very idea that there’s a place online where a reasoned (mostly) discussion of basic assumptions can occur without the hate-spewing vitriol is an achievement worthy of note. Let’s all hope it continues.
Quick thoughts:
Here and elsewhere you seem overconfident in predictive processing. I’m not sure what epistemic status you would formally give it, but I don’t think it should be stronger than “Interesting idea which could possibly be true.” But in going on to consider psychedelics, mental health, child development and placebos, you write as if predictive processing were more or less true. My sense is that this is something of a general pattern: you make sensible caveats in your epistemic statuses but don’t seem to internalise them fully.
“Psychedelic insight” requires better definition. There’s a probably true but not very interesting claim and an interesting but probably false claim which are being equivocated.
Look forward to the further post on SSRIs, since it’s an area where my lived experience (SSRIs are amazing) contradicts the evidence that SSRIs are only marginally better than placebo, and this causes me cognitive dissonance. If you could validate my experiences, that would be great, thanks.
If “set point” is the key phrase of the past decade’s research into nutrition, I fear it may have been a waste of time. Fortunately, I think this is basically a pop-science argument. There’s a disturbing tendency for rationalists to have heterodox beliefs about diet. I’m not sure whether this is a systematic fault in rationalism (a tendency to fill gaps in our knowledge with speculation? a fondness for clever explanations?) or whether it’s just a California thing.
If you were convinced by the Fermi Paradox in the first place, it should have been because you believed every one of the factors in the Drake equation had a non-zero lower bound. In that case Sandberg, Drexler and Ord’s paper should have had no effect on your belief. If people were concerned about the Great Filter because they believed the product of expectations was the expectation of the product, that’s a bit depressing.
It seems the last couple of decades (or more) has seen real terms growth in values across a variety of asset classes (property, stocks, bonds, etc). On the face of it, it makes no sense for the price of everything to go up in real terms, but it occurred to me that “real terms” actually means “adjusted for consumer inflation” so perhaps what has really happened is that consumer goods and services have consistently got cheaper. That seemed plausible enough, and in that case I can be optimistic that asset prices will continue to increase. However, there is at least some tension with the suggestion here that consumer inflation is high because of the ballooning cost of consumer services. As an aside, how convinced are we that consumer services are correctly incorporated into inflation indices?
My sense is that “cultural evolution” is equivocating between the true but boring claim that cultures evolve in the loose sense of the word and the interesting but false claim that this is analogous to biological evolution. There’s no replicator.
On enlightenment, I’m convinced that people experience long-lasting states of dysphoric depersonalisation. I’m unclear what else is being claimed, but I’m pretty sure that something else is being claimed. I doubt it’s true that people in this state form more accurate beliefs than people with normal consciousness or that they act more ethically, which seem like they would be the interesting claims. I’m deeply unimpressed by the reluctance of meditation practitioners to state clearly whether or not they believe these claims. I also think this is a weird California thing.
My suspicion with the rise in asset prices is you have a lot of rich people with a lot of money (thanks to rising wealth inequality) and no place to put it, so they bid the prices for the same assets up.
On enlightenment, I’m convinced that people experience long-lasting states of dysphoric depersonalisation. I’m unclear what else is being claimed —
Long-lasting states of non-dysphoric depersonalisation? One of the key aspects of meditative practices is the goal of relieving suffering or even ending it permanently. Most advanced meditators I’m aware of emphasize this part quite a lot. Not sure where the dysphoric part comes in here, unless you’re referring to the variable and apparently relatively uncommon phenomenon called Dark Night, or depressive / anxiety disorders in general.
On the claims which seem interesting, I doubt meditators act better than people otherwise equipped with ways to regulate their behavior. Meditative phenomena such as mindfulness are, as far as I’m aware of, useful in relieving symptoms of borderline personality disorder, for example. I don’t think so-called enlightenment necessarily brings anything new to the equation, and I agree that there are more ways to be mindful than to mindfully meditate.
As an intermediate meditator, I’d say that the most interesting and probably true claim related to meditation is that people carry on living normally while feeling less bad about it.
This was an unfortunate typo. Should have read “euphoric”.
Thanks for clearing that up!
Have had both euphoric and dysphoric depersonalization states, mostly fairly brief (at most a week?) and triggered usually by meditation insight.
In both cases, there is no ‘free will.’ Things happen, and expectations forming is at a minimum.
However, the euphoric states are characterized by a deep sense of calm and ‘flow.’
The dysphoric states are characterized by a nihilistic malaise and a feeling that it would be ‘preferable’ to have more agency/control, but not the sense of self to actively desire it in a wanting sense.
In both the euphoric and dysphoric states I am much more productive than usual (less easily distracted by repetitive hedon seeking behavior), but in the euphoric state am more likely to do spontaneous/high energy (‘fun’) things like dance/sing.
What would it mean to believe that you have had an insight but are wrong? Insight is definitionally a change in belief; the believer function of the brain is the only metric as to whether or not an insight has occurred. If you believe you’ve had an insight, you have.
Insight is a change from a false to a less false belief accompanied by a profound sense that ‘this belief is important, and there is a clarity to the means of arriving at the new conclusion’.
If the conclusion is wrong, you *feel* like you have an insight, but are wrong about that.
I can get behind that definition. But for practical purposes, it’s unlikely that you’ll ever prove that someone did not have an insight, given that fundamentally the only way we have to perceive the brain of a believer is filtered through their belief. As I understand it, most people don’t define their enlightenment as “I suddenly knew truths I didn’t before,” but rather “I started to perceive the world differently, in such a way that the truth was easier to observe.” Which may be a distinction without a difference.
It seems to me that the internal experience of having an insight is that things that formerly seemed confusing or random or unconnected “click” into place; you have a better model of the world that makes things make more sense.
That can be wrong. In fact, it usually is wrong at least in the “all models lie, some models are useful” sense. The best we can do is to test our insights to see if they keep making sense as we examine new information.
@albatross11 Paranoiacs provide a helpful illustration of the problem here: they frequently (in my limited observation) have the experience you describe where they suddenly grasp the link between seemingly unconnected things, and then think that if you don’t understand that the television is spying on you (say), it’s because you haven’t yet had this moment of insight.
The adtech industry seems to exist largely to validate the previously-implausible paranoid fantasies of crazy people…..
There is no inconsistency in the real price of stocks going up, any more than there is an inconsistency in real wages rising as they have, enormously, over the past few centuries. If individuals produce more, real wages go up. If firms produce more, the value of the firms goes up.
Hasn’t productivity growth been notoriously slow for ages now?
I’m not sure this is quite right, because wages are not analogous to valuations. If a firm produces more, its value goes up. If all firms produce more, I think the return to capital should have gone up and the values should be unchanged. If the value of all firms goes up, it should be profitable for me to start a new firm, so new firms get created until the value returns to equilibrium.
But I think we’re not fundamentally disagreeing in that we agree that a “real” increase in X means that X buys more consumer goods and services (and not that X buys more of itself). Given that wages, stock prices, land prices and even gold have increased in real terms over the past century, it seems to me that it may be more sensible to say that consumer goods and services have become cheaper. What I’m worrying about is how that sits with Scott’s concern that consumer services have become more expensive.
“Here and elsewhere you seem overconfident in predictive processing.”
I think the situation with predictive processing is similar to the situation with Bayesianism. IE I know there are mathematicians who debate Bayesianism by coming up with some kind of very complicated weird example and proving that some Bayesian equation handles it less well than some other equation, but when I talk about Bayesianism I just mean something like “your beliefs about things are/should be produced by combining priors you get from logic or domain-general rules with empirical evidence, and here are some examples of math about how you can do this to prove I’m not just making it up”. I think phrased that way it’s almost trivially true, but it’s a trivial truth that I didn’t think of / understand before, and that many people still don’t seem to have internalized.
With predictive processing, my boring overly-general trivial statement is harder to phrase. But it’s something like – “You know those experiments where if someone drinks white wine that’s been dyed red, they say it tastes like red wine? And you know how if you see a two faces/vase picture, you can kind of mentally switch it between interpretations by thinking really hard about faces or vases? And you know how studies show that someone will do a better job recognizing the jumbled/fuzzy word “nurse” if they’ve been primed (even subconsciously) with the word “doctor”? And you know the Stroop Effect? And you know [about 75% of other psychology experiments which have a vaguely similar structure]? All of those point to a model where perception is a combination of prior expectations plus actual sensory evidence.” Again, maybe this is too trivial to care about. But it’s something I didn’t explicitly think before, and once you explicitly think it, lots of things make much more sense (like the CBT framing of depression as “global negative bias”). So I don’t want to claim that everything is definitely computed via Kuebler-Leddick divergence or some strong claim like that, but I think there’s a framing of the idea that it seems to me almost inconceivable that it’s false, where it’s just giving a structure and a name to the way psychology looks at almost everything.
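A minimal formal version of "perception is a combination of prior expectations plus actual sensory evidence" is Gaussian cue combination, where the posterior is a precision-weighted average of prior and observation. The wine framing and all numbers below are my own illustration, not from any of the experiments mentioned:

```python
# Minimal sketch of "perception = prior expectation + sensory
# evidence": for a Gaussian prior and Gaussian observation, the
# posterior mean is a precision-weighted average. The numbers are
# arbitrary illustrations, not data from any experiment.

def combine(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior of a Gaussian prior updated by a Gaussian observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            obs_precision * obs_mean)
    return post_mean, post_var

# A strong prior ("this is red wine") pulls an ambiguous taste
# signal toward the expectation; a precise signal overrides the prior.
strong_prior = combine(prior_mean=1.0, prior_var=0.1, obs_mean=0.0, obs_var=1.0)
strong_signal = combine(prior_mean=1.0, prior_var=1.0, obs_mean=0.0, obs_var=0.1)
print(strong_prior[0])   # ~0.91: prior dominates
print(strong_signal[0])  # ~0.09: evidence dominates
```

The same two-line update covers both the dyed-wine case (precise prior, noisy evidence) and ordinary veridical perception (noisy prior, precise evidence), which is roughly why the framing feels almost trivially true once stated.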
“If “set point” is the key phrase of the past decade’s research into nutrition, I fear it may have been a waste of time. Fortunately, I think this is basically a pop-science argument.”
I think maybe every reputable nutritional scientist believes something like this? Certainly Guyenet does, and he’s got a PhD in neuroscience and is one of the top authorities in the regulation of appetite. If I type the term into Google Scholar I get a few thousand articles about it in reputable journals, and most people believe we’ve specifically figured out which part of the hypothalamus sets weight set point, which hormones communicate it, and what happens to rats when it’s disrupted in various ways. I’ve never heard any scientist deny weight set points, though I can’t prove that literally none of them do.
“There’s a disturbing tendency for rationalists to have heterodox beliefs about diet. I’m not sure whether this is a systematic fault in rationalism (a tendency to fill gaps in our knowledge with speculation? a fondness for clever explanations?) or whether it’s just a California thing.”
I love how it never fails that when people criticize how heterodox and sloppy rationalists are, they point to them believing something bog standard that every reputable authority in the field agrees with.
“On enlightenment, I’m convinced that people experience long-lasting states of dysphoric depersonalisation. I’m unclear what else is being claimed, but I’m pretty sure that something else is being claimed. “
I think at the very least the claim is that in enlightenment it’s euphoric depersonalization.
Yes, I meant to say “euphoric”. Apologies!
On set points, what I mean by “a pop-science argument” is not that it’s contrary to consensus, but that it’s askew from what scientists research. People study weight regulation mechanisms without getting bogged down in a debate about whether those mechanisms can or cannot be accurately characterised as a set point. It’s pretty obvious that in some sense they can (because weight regain occurs) and in some sense they cannot (because secular weight change occurs), and then we’re just arguing about definitions. But I claim no expertise and may be wrong!
ETA: For example, the Chief Medical Officer, in her recent report Time to Solve Childhood Obesity, simply makes no reference to set points.
See also this:
Yes, I agree the set point model is incomplete, but I think the most interesting research program going on now is trying to figure out how and why and what the implications of that are.
To make a physics analogy, we know that our current theory of quantum mechanics is problematic because it doesn’t explain gravity, but having the theory at all, and being able to say things like “an explanation of gravity is what’s missing from this” is vastly better than not having this.
In the same way, “how come calorically dense processed food makes the set point shift, even though normally it should stay fixed?” seems to be the most important question in nutritional research right now, and you can’t even frame it unless you start with a set point theory.
I don’t want to get too bogged down on semantics, because I think we largely agree on the substance, but I would phrase the question as, “How come nutritionally dense processed food dysregulates bodyweight homeostasis in many but not all individuals?” I think largely the phenomenon is not well described by “set point shift”, because that would imply that bodyweight shifts to a new equilibrium, whereas in practice many people experience a secular increase.
Completely change a person’s diet every week (from dense foods to light foods, nutritious to non-nutritious, and all permutations in between), or completely change their living/work conditions every week (from full sedentary to full moving, and all permutations in between from fast-twitch to slow-twitch muscle usage), and I hypothesize you’d find the person doesn’t have a set point.
What does “set point” get us that fullness-response, exercise-response, work-response (losing sight of hunger when deeply involved in work, becoming more hungry when working too long, etc…), stress-response, and the fact that people develop habits and tend to stick to them, doesn’t?
It seems to me akin to phlogiston or ether:
a model, an object that serves no real purpose. We can model the dynamics that actually occur without making up Quantum Crystal Harmonics Energies to do so.

I couldn’t find any references to Kuebler-Leddick divergence. Do you mean Kullback-Leibler divergence?
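For reference, the Kullback-Leibler divergence presumably meant here has a simple discrete form, D(P‖Q) = Σ p·log(p/q); a minimal sketch with made-up distributions:

```python
# Discrete Kullback-Leibler divergence, D_KL(P || Q) in nats.
# The distributions p and q below are arbitrary examples.
import math

def kl_divergence(p, q):
    """Sum of p_i * log(p_i / q_i) over outcomes with p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, p))  # 0.0: a distribution has zero divergence from itself
print(kl_divergence(p, q))  # positive, and asymmetric: D(P||Q) != D(Q||P) in general
```

It measures how surprised you'd be, on average, if you expected Q but the world was actually P, which is why it shows up as the "prediction error" quantity in predictive-processing formulations.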
This probably isn’t much help in terms of intellectual development, but here are my thoughts on willpower (something with which I believe I’ve had lots of experience).
I spent 23 years in the military. First as an Army paratrooper for three years, and then after college as a Navy SEAL officer for 20 years. One of the passages in military literature I came to love — and practically had tattooed to the back of my eyelids — was the following, from Marine Corps manual FMFM-1 (“Warfighting”):
And that is my working definition of willpower: persistent strength of mind and spirit.
I remember reading some study that supposedly determined there is no such thing as willpower, and I had to shake my head. Anyone who’s been in a combat situation — or even just general military situations — knows how much success depends on raw willpower.
Even if it’s an abstraction of sorts, does a significant portion of the population actually not think there’s anything such as willpower? Seems like a click-baity title.
This is a great line and I appreciate your sharing it: “the greater requirement is to fight effectively within the medium of friction.” In my field of mental health, I think about this in terms of helping people accept that friction states are the norm, rather than wishing for or trying to find frictionless states.
I don’t really know what willpower is as distinct say from “conscientiousness” as it’s measured in the Big Five personality tests. To the extent that “executive function” is a separate measurable thing from conscientiousness, we could add that since the ability to plan and purposefully implement one’s “strength” is key too.
I don’t really know what spirit refers to as distinct from the mind in this context or how we would recognize the difference. The persistent part makes total sense to me and is why willpower seems to me more akin to a trait like conscientiousness than a state that one induces. There appears to be a lot of built-in human variation in conscientiousness. I haven’t looked lately at the research on how mutable the Big Five traits are over the life-span. I would imagine military training is as good a testing ground of that as we have, but then it may also mainly be a setting in which people who are less conscientious are eventually winnowed out. Maybe there’s even research about this?
A word like strength evokes the image of say muscle building, and so suggests that regular practice will naturally increase it. I’m not sure we have evidence that willpower or conscientiousness works that way.
I don’t remember Scott’s writing about willpower now, so am going to go back and read some more. This topic feels most relevant this time of year when a lot of my patients come in talking about goals for the new year.
I’ve always been perplexed by the way people (and the authors) frame this paper. It’s just a rehash of the great filter (but with distributions!), it doesn’t “dissolve” anything.
I suspect you’re misunderstanding the paper. The conclusion is that our known science gives at least a 39% chance of being alone in the observable universe. (The observation that we haven’t seen anyone else yet may boost this as high as 85%.)
The great filter (as I understand it) was motivated by thinking we shouldn’t be alone, so there’s some mysterious filter killing everyone off. The SDO paper says no, it makes sense that we’re alone.
If you want to redefine the great filter to refer to known science about how hard it is for intelligent life to form, you can do that, but it confuses the debate.
I’m pretty sure the great filter has been proposed to be both behind and ahead of us: either some filter behind us making it incredibly unlikely for life to evolve, which we managed to get past somehow but which usually prevents life/civilizations from arising, or one ahead of us that is going to kill us off or at least destroy civilization.
Okay what evidence is there that either filter exists?
Oh, I’m not here to defend the concept. Just explaining that it is sometimes used to describe the difficulty of forming life.
Gotcha. My post was meant to be agnostic about the timing of such a filter.
Because of ________?
Because if you do the Fermi calculation correctly that’s what you get. The original calculation was very simplistic: just take your best guess at each term in the Drake equation, and see how many intelligent life forms you expect to see in the universe. Looking at wikipedia, this approach was high-variance, giving answers such as 10^-12 life forms or 10^7 life forms, depending on your inputs. The Fermi paradox (“where is everybody?”) which spawned the great filter idea (“they died!”) is based on assuming the expected number of life forms is closer to 10^7.
What the SDO paper does is account for our huge uncertainties by integrating over probability distributions instead of just picking single values. It turns out there’s a 39+% chance that the number of intelligent life forms in the universe is 1. Thus, there’s not much reason to wonder where everyone is, and not much reason to suspect an unknown great filter, either ahead of or behind us.
The great filter isn’t necessarily “they died”, it also encompasses “they didn’t develop” if the filter is very early on. The 39% probability of being alone is because the others have been filtered.
I agree, but as I said before I think that’s a different meaning of the great filter from popular use. The claims I’ve usually heard are that humanity will destroy itself, and that the lack of other intelligent life is evidence for that. SDO show that’s not true.
If you want to say known science describes a great filter for intelligent life, then sure I agree. But again I think that use of terminology confuses the debate.
Scott,
Thank you for running this blog. I know to someone in the Bay Area some of these may feel like codifying the latest IRL conversations, but I know you’ve spent time in Michigan, and there’s pretty much literally no one I can have conversations with about the topics that you post on out here in the hinterlands.
I (and presumably the few like-minded folks out here) get my (our) fix from among precious few other things reading your words. It’s hard to communicate how alienating it is to realize you spend a substantial part of your time thinking about things literally no one else around you is thinking about (and it’s given me the first gut-level inkling of what other forms of alienation like being of the non-majority race/sexual orientation/religious background might be like), so thank you for reminding those of us in that boat that there are other ships afloat.
Where in the “hinterlands” are you? Are you in Michigan yourself?
I hope you can find an in-person community you feel more engaged with. I would encourage you to keep looking – the ‘hinterlands’ are not an intellectual wasteland (or at least, none of the hinterlands I’ve ever inhabited have been).
You’re welcome. I don’t know if you mean you’re also in Michigan, but if so there’s a pretty good rationalist meetup group in Ann Arbor.
… I feel like this is, in some sense, ironic. I wish this was the default reaction of people upon discovering “gray tribe” (whatever that means).
I was excited when I found this blog. Now it mostly just depresses me.
Indiana, so unfortunately no dice on the meetup (Ann Arbor is ~4.5 hrs, and I have kids and a job).
And it’s not an intellectual wasteland by any means: there’s plenty of first rate engineering, bio-informatics, computer science, etc. But nothing like this/less wrong/etc.
This post seems like a good opportunity to express some gratitude for this blog from a reader point of view. Over the past half-decade, your writing here has done more to shift my thinking than any other single source. I’ve had to rethink several perspectives as a result of excellent points that you brought up here, and for a number of others, this blog has helped clarify my thinking even if it hasn’t ultimately changed it.
So thanks for grappling with these topics in public!
I would also like to express my gratitude, but for a different reason. I don’t think Scott’s posts have changed my views very much on any issues, although they have certainly informed me of ideas I hadn’t thought about before.
But the main value of the blog, to me, is that it is the one place I know of online where I can participate in civil conversations with intelligent people covering a wide range of beliefs and backgrounds, Catholic to atheist, communist to anarcho-capitalist, plumber to rocket scientist.
+1
You’re welcome!
“Over the past half-decade, your writing here has done more to shift my thinking than any other single source”
I was going to say exactly this, so +1
Meditations on Moloch, the Albion's Seed-related posts, and The Categories Were Made for Man (and many other posts) come to mind as posts that had a huge impact on the way I see the world, so thank you very much.
Re: Behavior: The Control of Perception
Gary Cziko was saying similar things in 2000’s The Things We Do: Using the Lessons of Bernard and Darwin to Understand the What, How, and Why of Our Behavior.
I have to repeat what WayUpState said, “the very idea that there’s a place online where a reasoned (mostly) discussion of basic assumptions can occur without the hate-spewing vitriol is an achievement worthy of note. Let’s all hope it continues.”
Ah, the fallibility of memory. Cziko was explicitly following William T. Powers, the author of Behavior: The Control of Perception, which was originally published in 1973.
Scott, you are a unique talent and I am immensely grateful for your existence and willingness to devote the energy to share your thoughts with the rest of the world. I’m sure there are plenty of people as smart or smarter, or as self-reflective, but few of them consistently generate new creative ideas in multiple areas while fostering a charitable community to engage with them.
You have persuaded me on some things, and I have ‘leveled up’ as a result of reading your work to the point where I can clearly and with nuance explain to myself why I disagree with parts of it. Thank you.
Also, as a data point re: placebo: I'm allergic to pine nuts, which are rare enough that I don't bother checking for their presence. If I suspect I ate them (a green mystery sauce, maybe pesto?), I get nauseated even if I didn't. And if I did eat them but then get Bayesian evidence that I probably didn't, the nausea temporarily lessens significantly, until it rises again and I do throw up (and then immediately feel much better).
How do you think this intellectual progress changed life outcomes for you?
Not at all!
(except through helping me write this blog, which has been good networking in various ways)
Hi Scott,
I, too, came across predictive processing last decade and bought Clark’s book after reading your remarks on the idea. I really liked the analogies you drew in MENTAL MOUNTAINS, which expands the idea from neurotypicality and its limits to interventions!
But I hear you are still stuck on schizophrenia and autism (so am I), so I just wanted to remind you of something so your great brain can work through this and hopefully report back to all of us (who await with extreme interest): predictive processing (see Clark ch. 7) intersects with these particular atypicalities in surprising ways. Clark paints a picture of schizophrenics failing to attenuate self-generated sensory experiences and then being left with large sensory prediction errors they must explain away, which they do by adopting wild hypotheses (rather than reducing the weight assigned to self-generated sensations). It's a failure to detect one's own agency and adjust for it.
Clark then illustrates autism as a bundle of social and non-social symptoms; he argues the non-social symptoms may stem from overweighting sensory prediction errors (seeing, for the first time, that the kitchen table is not a table in a kitchen [easily predicted and not newsworthy] but THE kitchen table [whoa!]).
Depending on how you look at it, people with schizophrenia or autism are experiencing hyper- or hypo-priors, but of course the prior isn't visible; rather, we infer it from how they react to sensory signals. Maybe they are more similar in that both are privileging the sensory signal, but differ in how that weighting is achieved (reduced self-attenuation vs. hypo-prior). [And then my mind wanders over to SYMPTOM, CONDITION, CAUSE]
Cheers
I’m not sure I get the jump from “large sensory prediction errors” to “hyperpriors”, or from “overweighting sensory prediction errors” to “hypopriors”
For Clark, it's about the relative weighting of the hypothesis of how the world is vs. incoming sensory information. You can drop the Bayesian language (he only brings it up occasionally). For schizophrenics, the idea is that they strongly weight incoming information, even when it deserves lower weight, like sensations from self-tickling.
Clark cites Adams et al.'s car-warning example. Your temperature warning light goes off a lot, so you believe your car is messed up and go to the mechanic (who inspects the car, not the warning light). They find no fault. How do you reconcile this? Is your car actually fine and the warning light mistaken? Is the mechanic incompetent, or even lying to you? Schizophrenics might have a faulty warning light, and you don't say your sensations are wrong; you adopt the wild hypotheses needed to mitigate the prediction errors.
For autistics, they strongly weight incoming information (@Aapje, yes, a filtering issue). Both yield big prediction errors that force a reckoning. Schizophrenics end up adopting crazy revised hypotheses, which look to us like they aren’t being sensitive to reality, but they might actually be overly-sensitive. Autistics end up rolling out very specific revised hypotheses, which look to us like they are adopting highly context-specific views of the world with weak generalizability, and they might actually be overly-sensitive to sensory stimuli. Put another way, it might look like schizophrenics and autistics are opposites because one seems to be “sticking to their guns” and the other seems to be “swapping out their guns,” but it could also be they are similar because they both could be updating too much- over-privileging incoming information.
These are competing models for each, and it depends where in the adjustment process you think you are:
SCH1: strong weight on hypotheses, “they don’t respond to the evidence like us”
AUT1: weak weight on hypotheses, “they develop very specific schemata”
1: SCH and AUT differ in their relative weightings of hypotheses to info (e.g., SCH is H:I>1, AUT is H:I<1)
SCH2: strong weight on info, big prediction errors, end up changing to wild hypotheses to accommodate the error, but we think "they don't respond to the evidence like us" and infer they have a strong weight on hypotheses
AUT2: strong weight on info, big prediction errors, end up "they develop very specific schemata" and infer they have a weak weight on hypotheses
2: SCH and AUT might NOT differ that much in their relative weightings of hypotheses to info (e.g., SCH is H:I<1, AUT is H:I<1).
I don't have this figured out.
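One way to make the H:I ratios in these sketches concrete is a single precision-weighted Gaussian update, the standard toy model in the predictive-processing literature; the numbers here are made up purely for illustration:

```python
def posterior_mean(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted average of hypothesis (prior) and sensory input."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / total

prior, obs = 0.0, 1.0  # the hypothesis says 0, the senses say 1

# H:I > 1 -- hypothesis dominates; the posterior barely moves toward the data
stick = posterior_mean(prior, prior_precision=9.0, obs=obs, obs_precision=1.0)

# H:I < 1 -- sensory input dominates; the posterior jumps toward the data
swap = posterior_mean(prior, prior_precision=1.0, obs=obs, obs_precision=9.0)

print(stick, swap)  # 0.1 vs 0.9
```

The arithmetic is the same whichever model you favor; the models differ only in whether the observed behavior ("sticking to their guns" vs. "very specific schemata") reflects the weighting itself or a downstream response to large prediction errors.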
@slatestarreader
It seems more likely to me that autism is primarily a filtering issue.
Take this model: mental model -> prediction -> sensory input -> parsing/filtering/abstraction -> comparison to prediction & update model -> prediction -> etc
This fails if the filtering/abstraction step goes awry.
For example, this is a well-functioning person: mental model of how I can operate a car -> I can safely proceed by holding the steering wheel straight, pressing the gas pedal, etc -> see bird in the sky, see traffic light turning red ahead, see kid playing on sidewalk, feel texture of steering wheel in my hands, etc, etc -> red traffic light is relevant and needs to be incorporated in the model -> I will break the law/risk a crash if I keep doing what I’m doing, but will be fine if I brake in time, etc
A fairly mildly autistic person: mental model of how I can operate a car -> I can safely proceed by holding the steering wheel straight, pressing the gas pedal, etc -> see bird in the sky, see traffic light turning red ahead, see kid playing on sidewalk, feel texture of steering wheel in my hands, etc, etc -> the bird is relevant, the traffic light is relevant, the kid on the sidewalk is relevant, the texture of the steering wheel is relevant -> comparison to prediction is overwhelmed by irrelevant data.
Note that in this model, the filtering process is largely subconscious. For example, non-autistic people don’t have to explicitly recognize that the feeling of their clothes is irrelevant, but quite a few autistic people are very sensitive to clothing that gives strong sensory inputs, being unable to automatically filter that out.
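A toy version of this pipeline, with invented stimuli and relevance weights, where the only thing that changes between the two runs is the filter threshold:

```python
# Hypothetical (stimulus, relevance) pairs from the driving example above
inputs = [
    ("red traffic light", 0.9),
    ("kid on sidewalk", 0.7),
    ("texture of steering wheel", 0.2),
    ("bird in the sky", 0.1),
]

def surviving_stimuli(inputs, threshold):
    """The filtering/abstraction step: only stimuli whose relevance clears
    the threshold reach the comparison-to-prediction stage."""
    return [stimulus for stimulus, relevance in inputs if relevance >= threshold]

typical = surviving_stimuli(inputs, threshold=0.5)     # aggressive subconscious filter
overloaded = surviving_stimuli(inputs, threshold=0.0)  # everything gets through

print(typical)     # ['red traffic light', 'kid on sidewalk']
print(overloaded)  # all four stimuli compete at the comparison step
```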
As a mildly autistic person, I can confirm that filtering of the type you describe is something I have to do on a conscious level. This is useful in video games (where nearly every stimulus carries some relevant information), but exhausting in real life situations.
It took me a long time to understand that the NTs around me failing to notice “little things” wasn’t because they were all stupid or blind, it was because they’re also failing to notice literally thousands of irrelevant stimuli that I (unlike them) had sorted through one by one to find just one that turned out relevant.
Mild autism is also useful in situations where the devil is in the details (like programming).
—
Also, plot for a book:
An alien hive mind has tried integrating humans into their neural network, failing each time. Only once they try incorporating an autistic person do they succeed, as the neural network sends each node prefiltered stimuli that need to be processed completely by the node. Regular humans re-filter the already-filtered information, removing relevant detail, making them unable to function as part of the neural network.
This plot could impress upon people how being normal/high-functioning depends on the environment.
Cherryh wrote a book in which hive minds incorporate “azi” (human clones programmed from birth to be servants), which seems a little like this.
This is a good time to say thank you very much for this project, and for allowing us to participate, witness or simply sit back and enjoy, according to our abilities and inclinations.
A very happy next decade now that we’re all in the SF future of the 21st century!
What Deiseach says.
And I’m hugely impressed at Scott’s feat of so systematically tracking and assessing his own intellectual development. Speaking of willpower.
You’re welcome. All of you people are going to feel so silly when I celebrate this blog’s 10th anniversary in a few years and you have to thank me all over again.
I just want to point out, Scott, that I admire both your intelligence and your conscientiousness / focus / whatever greatly. Your productivity has been something which has both baffled and inspired me, and you have inspired me to write, too. Thank you for carrying on with this project, looking forward to reading more of your work.
The same can be said for many commenters, here.
It’s pretty remarkable to see all this work summarized in one place. It would be impressive for any one person to accomplish this, but to accomplish it in one’s free time after a full time day job is pretty astonishing.
I second the commenter who said that s/he had literally no outlet for discussing these topics outside the context of the blog – that has been my experience too. I’ve brought up topics that originally came up here with my generally quite intelligent friends and family and have been met with a shrug or a shallow opinion on the topic without the desire to think things through at a deeper level. I think about “Against Murderism” ALL THE TIME, and nobody else seems to want to really think about these issues at all.
And the blog has changed my life in concrete ways too. I focus my charitable donations on EA causes that I hadn’t considered previously, and as a result of the adversarial collaboration on the ethics of eating meat, have dramatically scaled back my meat consumption to probably 20% of my previous value, and the meat I do eat now is almost exclusively fish and invertebrates.
You are the third or fourth person to say the adversarial collaboration on meat had that effect and I am so happy. Thanks for letting me know about all of this.
Hey, I was in that study!
I feel like I’ve had a lot of deconfusion due to this blog, and this decade brought me to the age of 53. Thanks for that! I do think there’s a clear slowdown with age, but there’s still tons to learn if you’re trying.
Here’s a question: What’s the best way to generate more … Scottness (at least as to process) in the world? The S[cott] Foundation for the Betterment of Everything? I’m in.
I feel like the usual run for very successful blogs like this is to end up at the Atlantic or something – short-form, punchy stuff is what sells, which isn’t this blog and please don’t make it that.
Anyway, thanks for helping improve the rest of us while improving yourself.
And a happy new decade to you too, Scott! 🙂
Don't worry, no one else knows either. I've had a concept in my head that's been begging for expansion since 2017, and that I haven't expanded because I have neither funding nor enough people yelling at me to write, where ASD, SZ/SSD, and borderline PD are the extreme points of a kind of "Neurodivergence Triangle," with presumably many (most?) individuals at less-categorical points inside it. BPD feels very opposite-of-autism in an intuitive kind of way, but I assume you've read the most-known study here already, even though it's definitely not the be-all and end-all, and a BPD-autism convergence is the part I get the most criticism for from relatively educated people.
In theory a lot of relatively educated people would criticize the schizo-autism convergence, but no one who's ever met me does. Couldn't guess why. My intuition is that any question formatted as "Are [neurodivergent, regardless of specific label] people unusually charismatic, unusually uncharismatic, or both?" is properly answered with "Both." I haven't yet encountered a label that doesn't seem to just end up with absurd variance on charisma levels, regardless of its reputation. (In theory histrionic PD might prove the exception, but the reputation's failed me every time so far, and I'd expect uncharismatic people who otherwise fit the HPD profile to be disproportionately diagnosed as something else.¹)
Enlightenment looks like an obvious absence in this list.
Also, while I've got you here: I enjoyed taking the 2020 SSC survey (and previous SSC surveys), but I have some constructive criticism on a couple of questions that conflate "other answer" with "no answer." I noticed this primarily in the "What religious belief/denomination do you follow?" question, which actively conflated "Mixed/other" with "No religion, but wants an option" – a pretty unfortunate conflation in a room that massively overrepresents atheists/agnostics and mildly overrepresents people who follow weird eccentric things. The same problem hit "How does melatonin work for you?", which conflated "It does something other than these options" with "I've never tried it"; I was looking forward to mentioning my not-mentioned-there-but-have-met-other-people-who-experience-it experiences, but ended up just rounding them to "does nothing."
¹: Kind of cursed, hot take, not endorsed: So the orthodoxy is that ASPD and HPD having radically different sex ratios is “women who act the same way as men with ASPD get diagnosed with HPD”, right? What if maybe it’s closer to the opposite, and there are basically ~no ASPD women or HPD men except in people who are highly gender-non-conforming in all aspects, and gender-typical people being diagnosed with the atypical one is actually just a slightly noncentral case — like, say, a woman who’d be obviously HPD otherwise, but doesn’t have the charisma to pull it off?
Your work on politics is so rare and so important. I salute you for doing the immensely hard work of trying to get behind the assumptions behind the assumptions that drive most of our political discourse. Like (I imagine) many readers, I try to do the same thing and often end up hitting walls and just thinking that certain groups I disagree with are crazy or that their wacky ideas are entirely explainable by their being misinformed on some major things. Your recap of how your views have updated and how long it took to figure out what some of the disagreements were really about is both encouraging to me and daunting. Thank you again, and I really hope you continue this for a long time. It’s very helpful to a good many of us, including people like me who rarely comment.
Scott – I will echo what many here have said.
I am immensely grateful for what you are doing here.
Thanks for your efforts and for this summary post! I’ve learnt a lot from you too; though it’s not as easily documented.
If I did a new Civ game, I’d include SlateStarCodex as a late game Wonder…
I’m not sure on the Fermi Paradox.
If you pick a random civilisation from the set of possible parameters rather than a random universe you get a completely opposite answer.
As an observer should I think of myself as a random universe or a random civilisation?
Genuinely tough philosophical puzzle. I think the parameter stuff *does* solve the Fermi paradox in the sense that it means our observations do not require any astronomically unlikely occurrence (besides the existence of intelligent life, which is guaranteed by the anthropic principle).
But I guess the argument is, it’s extraordinarily unlikely that we’d be *this* civilization instead of one of the millions of civilizations in a more prolific universe. Not sure how to think about that if the more prolific universe doesn’t actually exist.
Pick a random civilization from the ensemble, but reject any pick that can see any of its neighboring civilizations. Pick a random universe from that ensemble, but reject any universe with zero civilizations. Your results should converge.
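A toy simulation of those two procedures (with an invented distribution over universes, nothing to do with the SDO paper's actual numbers) shows how far the "random universe" and "random civilization" views can come apart:

```python
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's method; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Toy multiverse: each universe's civilization rate is log-uniform over
# seven orders of magnitude, and it hosts Poisson(rate) civilizations.
counts = [poisson(10 ** random.uniform(-6, 1)) for _ in range(100_000)]

inhabited = [n for n in counts if n >= 1]
solo = sum(n == 1 for n in inhabited)

# Random-universe view: among inhabited universes, how many hold exactly one civ?
universe_alone = solo / len(inhabited)
# Random-civilization view: weight each universe by how many civs it hosts
civ_alone = solo / sum(inhabited)

print(universe_alone, civ_alone)
```

A randomly chosen civilization is much less likely to be alone than a randomly chosen inhabited universe is to hold a lone civilization, because prolific universes contribute disproportionately many observers.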
If most civilizations create paperclip maximizers as soon as they can, then a much higher percentage of civilizations in the multiverse will arise in universes where it is hard for high-tech civilizations to form, compared to the situation where advanced civilizations don't grab all the resources they can.
Scott,
As a lurker, I feel this is an opportune time to come out and give some measure of thanks.
Thanks for putting your thoughts out there for me and the other readers. I only found SSC a little more than a year ago, but it has had a profound effect on my thinking, and my life. I honestly don't know how you have time to do everything that you do, but please keep it coming.
Thanks!
As others have said, this sure makes me want to go through the archives again. Thanks for all your work, this blog is really wonderful.
It’s interesting to hear that you have a political post in the pipeline, since I thought you’d gone dark permanently on that front.
Over the past year the absence of politics has been striking, and a little worrying for the long run prospects of the blog. I speculate — without real evidence — that it’s difficult to remain engaged in general philosophical inquiry over the long term, if you don’t allow yourself to stray into politics from time to time. The philosophy starts to feel sterile.
Not sure if you’ve found this.
I was surprised not to see Toxoplasma get a shout out here, that’s one of my favorites and overall I think this blog’s analysis of social media, although it straddles the political and SJ topics, is worthy of note for the decade.
Seconded! Would love to see the Toxoplasma post included in this decade review, for posterity. It’s been a fascinating and very useful concept for me, and I imagine it will only continue to become more relevant as deepfake video tech becomes more widely used.
What does PANDA stand for
Prescribing ANti-Depressants Appropriately
Yeah, I know. ಠ_ಠ
Thanks, couldn’t find it in the linked paper.
No mention of Unsong? It was a nice accomplishment along the way, even if not an example of intellectual progress exactly.
Sure. On the other hand, the kabbalistic value of not mentioning it – think about it.
…I laughed. 😀 Apt!
(And for people who prefer just reading the solution, rot13: Vg’f orfg gb yrnir Hafbat hafhat.)
Thank you for this blog
Can you link the comment?
I didn’t save it!
Did the person know about predictive processing as a field, or did he talk about something like predictive processing thinking he came up with it, maybe talking about how the brain is a prediction machine and everything else comes from that? Do you remember any details? Who the user was?
I think you're overlooking something more basic and important than any specific issue – the existence of this blog itself, and its community. It's a weird, unique place where people with high intelligence and long attention spans can gather to discuss, and even do meaningful original research, on interesting problems, without being tied to academia. It's like an alternative, open-source version of academia! Even if your blog posts were totally wrong about everything so far (and I don't think that), I think this community has the potential to do great things in the future. My only wish is that it becomes more de-centralized and less reliant on Scott personally running the show.
“Open-source academia” is a very clear way to put it, thank you.
The less wrong forums are one attempt to have a decentralized version.
I think the quality of Scott's work, and his willingness to put in the effort, is just so much higher than the highest-effort commenter's, though, that I'm not sure how this blog could successfully move to decentralization. The will just isn't there.
Curated guest posts, I think, would be the first step, and the adversarial collabs serve a similar purpose if you squint, but his original work is still better on average (though I am extremely glad for the existence of these as well).
Except that academia generally requires actual data and a long time spent acquiring expertise. SSC doesn't have these requirements. So while SSC has intelligent people thinking thoughtfully (more so than nearly any group anywhere), very often they are ignorant of the work that has already been done in the field in question, and nearly always they don't have the time to acquire concrete data of their own.
I wonder why the discussion of individual differences covered in What Universal Human Experiences Are You Missing Without Realizing It?, Typical Mind and Disbelief In Straight People, and Different Worlds didn’t make it into this summary. I found those posts to be particularly interesting and useful for generally being more empathetic. Maybe they largely represented the characterization of insights Scott had had before the 2010s, but some points in those posts indicate that they discussed realizations that he had arrived at, or at least developed, more recently.
published Summer 1994
Nice to note: pretty sure the likelihood of at least one of the numbers in the presented list being a Schelling point for two random people likely to be reading the essay has changed drastically between 1994 and today. …verified anecdotally
Scott: You had to post this today, didn't you? My wife is due with our third child in three days, I've got to get things ready around the house and close up everything at work, and I keep falling so deep into LessWrong rabbit holes that I half expect Eliezer to manifest from my phone.
In any case, thanks for writing the greatest blog on the internet, and thank you for an excellent decade. Cheers.
In places where you have moved from one position to a somewhat more nuanced position, you have managed to lead me from “I didn’t even know this was a question worth asking”, to what I hope is a reasonable understanding of a fascinating subject. Many times over. Thank you for the past seven years of blogging, and best wishes for seven times that beyond.
Thanks for your almost-seven years of commenting!
Hi Scott, I was wondering: What is your system for dealing with all the information you read? It’s amazing to me that you can keep track of all these topics (and your own thoughts on them) well enough to write so many blog posts, and to follow a line of thought for a decade.
Out of curiosity, what would 2010 Scott view as his most serious disagreement with 2020 Scott, and vice versa?
I think 2010 Scott was more in favor of crystal-clear elegant systems (with an understanding that they’re not always perfect, but they’re still desirable), and 2020 Scott is more accepting that these don’t exist and that the approximations that do exist aren’t always good enough to be interesting, and sometimes you just need to muddle in and let your intuitions fight it out.
Isn’t a move from “crystal-clear elegant systems” to “sometimes you just need to muddle in” called just “growing up”? Hope this doesn’t sound too patronizing. Anyway, I have a hard time imagining anybody going in the opposite direction, i.e. starting from the acceptance of the world as complex and only approximately-intelligible, and then over time becoming more convinced that you can actually understand it like a mathematical theory based on axioms and inference rules.
Thanks, I’ve enjoyed being along for the ride.
@Scott Alexander,
Thank you for your writings and thank you for assembling this group of lovely people (both on-line and at the two Meet-ups I visited).
Scott,
I would like to add to the chorus with a hearty Thank You! You have built a great blog and a great community, and though I rarely/never comment my life has been enriched.
Despite the extensive list of themes and posts, many of the most thought-provoking posts, and the posts that I have shared most frequently, were not listed. That really shows the depth and breadth of your output over the past 6 or 7 years.
Thank you so much Scott! This blog is without a doubt the best I’ve ever read, and you’ve provided us with so much to consider, learn from and just plain enjoy. Best wishes for the decade ahead.
This post should be the syllabus/reading list for a college course so people like me get the stress response necessary to read all the links. Any professors out there willing to buck institutional incentives and start a trend?
Scott, I'd also like to join the chorus of appreciation for your amazing work. Of the posts you don't mention here that I found especially enlightening, I immediately recall your appraisal of 90s environmentalism, but there are surely more.
Scott, I tip my hat to you, not only for making such a lovely space for communication and sharing your intellectual thoughts, but for acquiring, holding, and dealing with your internet fame with humility, maturity and a much more level head than many others in similar situations.
Thank you also especially for Unsong and your short stories. Those have always been really quite delightful.
(No worries – I’m happy to say all this again when this blog hits its tenth anniversary! ^_~)
When the fame started I was upset. I was certain I would lose this strange and wonderful thing. If I were Scott I would have bravely ran away. But somehow he didn’t, and that’s just as remarkable as all the rest of it.
It’s been inspiring for me to see someone practicing many of the techniques advocated by Yudkowsky, while still managing to consistently signal a lack of Yudkowskian (i.e. planet-sized) ego. Many people whom I have met develop rationality and a planet-sized ego at the same time, and turn into the “rationalist asshole” stereotype. Scott is a nice high-profile counterexample.
Late to the thread, but it had to be said: this is why I’m still here. Rationalism without humility is both boring and unhelpful.
I’ll echo lots of other people: Scott, thanks both for writing such thought-provoking and interesting posts, and for hosting this amazing community.
Scott: Adding to the chorus of people saying thank you for keeping up the work.
The how-has-my-mind-changed post is hard to do. I started one, wound up finding it too difficult, and put together a post on sites I found in the past decade that I’ve kept up with once finding them instead.
SSC tops the list of those sites. It was also one of the sites that pushed me to start one.
Thanks again.
“One place I completely failed was in understanding the psychometrics of autism, schizophrenia, transgender ….”
Well maybe you failed because you have been making the same category error day in day out your whole life.
Hint: you can’t begin to understand the revealing aspects of what you call psychometrics until you learn to call what you are observing by their real names.
It is kind of awkward and extremely non-symmetrical to lump “transgender” in with “autism” and “schizophrenia” in the same sentence.
Because the word transgender applies to everyone with transgender tendencies, whereas autism, as used in contemporary English, applies only to the subset of people with amazing attention spans whose gifts come at the expense of their social skills (not to those considered really smart, with great attention spans at no expense to their social skills), and the word schizophrenic applies only to that subset of creative people who are, well, creative but also schizophrenic, to the detriment of their social skills.
I think the only way to approach these subjects is —- What explains optimal attention spans * and * autism, what explains charming creativity * and * schizophrenia, what explains near-complete understanding of the opposite sex with empathy * and * gender dysphoria.
As Hegel used to say, details matter in philosophy more than in any other subject. Nobody understands a difficult subject unless they know at every significant level of detail what constitutes understanding of that subject. Or something like that.
So – and I am sure you know this, but I am just trying to phrase it in a way that makes anyone reading a little more able to be careful in their thoughts about these matters —–
don’t try and understand autism without considering it a part of the human attributes that are thought of as
“optimal attention spans, which is good, and autism, which is sad”, don’t try and understand schizophrenia without considering it a part of the human attributes that are thought of as
“charming creativity , which is good, and schizophrenia, which is sad”, and don’t try and understand gender dysphoria without considering it a part of the human attributes that are thought of as “near-complete understanding of the opposite sex with deep empathy, and gender dysphoria, which is sad”
Hope that helps.
By the way, you run an interesting blog here. Not very sound on issues of natural law, but like you said, you have a few decades left to figure things out.
As someone who researches all three of these labels and their overlap, your take is…interesting. The SZ one is one I sympathize with, but the ASD and GD ones describe something very, very different to ASD and GD. Can you expand on how you consider ASD to be ‘an optimal attention span’ or GD to be based in ’empathy with and understanding of the opposite sex’?
First off, I do not claim to know what I am talking about, I speak only from friendship with many people who wondered what you wonder – what is going on with Autism and GD?
Sorry that I cannot describe from personal experience here why I said what I said – of course I could, but it would take too long, at least 40 pages – and I would much prefer to speak heart to heart; but like I said, at anything less than 40 pages I could not win your confidence, and I am not going to write 40 pages here …
but here are some references, with which I am sure you are either familiar, or which will not surprise you as being relevant.
The Hegel reference was to the fourth paragraph of the Prefatory Notes of Hegel's Lectures on the History of Philosophy in the Haldane translation, where he (Hegel) discusses some less energetic students of the history of philosophy who understand that there are a certain number of tones but do not understand that harmonies are possible. That is Hegel at his most arrogant, but it is in its way charmingly full of hope …
In referring to non-symmetry I would have liked to talk about what linear algebraists talk about when they try to make you understand that Lie algebras are not only something you can understand as multiplicative, but also something that you understand much, much better when you understand that there is also the negative connotation of every positive algebraic transaction. I remember reading, years ago, how Peter Woit, on his charming weblog, discussed his realization of this; but now, years later, I have never been able to google the exact quote where he described his almost instantaneous realization that he had been studying a small part – the above-water part – of a fascinating subject that had lots of fascinating parts below the waves.
In using the word "empathy" I was referencing, in an astounded and surprised and disappointed way, those psychology statisticians who look at GD as something that only happens to people who reflect GD in some clearly measurable exterior way; whereas GD, with respect to the teleological function of our desire to have sex, is not really a "syndrome" to be studied or a "flaw" to be described in isolation: rather, it is what happens when people who know what the other sex feels like, or who have a desire to know what the other sex feels like, are shut out from finding a partner who wants to be understood, either from their own laziness in not being the male or female their opposite number wants (for the heterosexuals) or from their misunderstanding of how to be a friend to someone of their own sex (for the heterosexuals). If you count up the number of people who know what the advantages would be of being the opposite sex, given their lot in life, the number of such people is probably about ten times the number of people who know that and act on that possibility. "Empathy" here means treating the sad people who think having gender dysphoria is a curse with kindness – while remembering that many people who one would think have gender dysphoria are not-so-sad people; there are quite a few trans women who have access to handsomer men than most cis women, and seriously, who feels sorry for Blaire White – I mean I do, but I feel sorry for everybody. Pushkin, in his great epic poem, described (in the first chapter) how Eugene Onegin was able to find many girlfriends in the Saint Petersburg of his day by arraying himself in a way not dissimilar to Venus. There are many other examples in literature and real life.
I went on too long about GD, I think, but I plan to go on just as long about autistic people.
Feel free not to read on!
You know how everyone always talks about how great Down's Syndrome kids are, how full of love and trust and joy?
Well do you know they have anger issues? I know.
Seriously, there are no diagnosable psychological conditions that do not have downsides.
Autistic kids – at least the ones who can go to the bathroom on their own – are charming; it breaks your heart to see them so, so excited to meet someone who wants to discuss whatever sad little subject they are in love with, not because there is anything wrong with dinosaurs or pirates, but because we all know we were born not to have target location errors for our passions.
And what autistic kids never understand, without good treatment, is that it is just wrong to be passionate about something that nobody they know is passionate about.
Sorry but it is true.
Now, to answer your question from 12:56 AM on January 11 (I hope you see what I did there)
What difference is there, really, between an autistic young man who will never know that his rent is being paid by people who are sad that he will never be able to pay his own rent, and who has a big innocent smile on his face when the new Star Wars movie comes out, and a Feynman who never found someone to love after he got lucky that first time, or an Einstein who was never there for anybody close to him in the way that we should all be there for people close to us, when they show up at their office and someone who, like them, may have only once or twice in their life discovered something new engages them in a conversation that only makes them happy but will never lead to real knowledge?
Seriously, unless you are an Einstein-maniac who thinks that the General Theory of Relativity would not have been discovered until the late 1980s if Einstein had died in the epoch of the Spanish flu, or one of those poor Feynman-maniac fanboys who cannot possibly know the whole context of his cumulative contributions to the NUMBERS and little physical ideas that so fascinated the poor guy back in the day – well, unless you are such a maniac, you have no problem understanding that the least of the autistic failures of our day who accomplish a real relationship in their life – cor ad cor loquitur, heart speaks to heart – are individuals whom we do not need to consider as objects of study, because they have triumphed, just as much as poor Feynman and poor Einstein would have triumphed had they been kind-hearted, had they bonded to a female of their species in the way we were created to do, had they been decent guys.
Finally, I could be wrong about Feynman, he may have succeeded at loving those close to him in a way that I missed.
You probably do not want to hear me rant about schizophrenia and creativity. But if you do, google "efim polenov" and "marginal revolution", where I wrote about a hundred thousand words in the comment sections on creativity, with a few thousand words specifically relating to schizophrenia.
Thanks for reading, and please remember I do not claim to be right or to know anything more than anyone else, I only claim to know that I remember much much more about people who are troubled than I would ever have imagined I could remember, back in the day.
Then perhaps you should not have phrased your initial comment as if you were stating facts known with certainty.
If you don’t agree with that description, reread the comment.
David, I am used to talking to people who are much less concerned with the world than you are.
I think I understand why you are using words that indicate anger and contempt, but I could be wrong.
Reread what I said and tell me again that I know nothing about rhetoric.
I am sort of an expert – if such a thing exists in this world – on schizophrenia; on autism and its analogues among the people who construct a real life although they are tempted by autism and its rewards; on the phenomenon of what people in America call "trans issues"; and on another issue which I have not mentioned heretofore, the issue of hoarders, all of whom I know were once young and innocent. For the record, I could write a few hundred words on why the phenomenon of hoarding is prevalent in some countries and not in others! But if I did, I would only get, on this website, the curt dismissal that your reply displayed. Amirite?
The rhetorical device that you mocked is taken directly from the rhetoricians whom Dante studied, by the way – let me explain it.
if you know something that people would be happier knowing, but they don't know it, and if you also know that people (and unless you can talk to the animals, people are basically your only audience) are proud and, absent some confession of human failure on your part, they are going to get angry at you and not listen to you – NOT LISTEN TO YOU WHEN YOU TELL THEM SOMETHING THAT WILL MAKE THEM HAPPIER, BECAUSE THEY ARE PROUD AND ARE RELUCTANT TO BELIEVE THEY HAVE ANYTHING TO LEARN ABOUT THEIR FELLOW HUMANS – AND AFTER ALL THEY HAVE DONE SO MUCH IN THIS WORLD, AND WHO DOES THIS GUY THINK HE IS TO TELL THEM SOMETHING ABOUT THINGS THEY KNOW EVERYTHING ABOUT – well, knowing that, and knowing that unless you speak with words of humility and fellow-feeling with respect to all of our failures at understanding and comprehension, and unless you tell them that "maybe I, too, am mistaken", you will be just as discounted as a week-old bagel … knowing that unless you throw in a few rhetorical (but in my case heart-felt – trust me) attempts to get people who are too proud to listen to you to step away for a few moments from their pride, you will not succeed, for all your efforts, in telling them something that is good for them to hear …
tell me what you would do, knowing all that.
If that was too long to read, Professor Friedman, here is the short version
— I knew what I was talking about.
I was just being nice.
I could be wrong, but we all should try being nice more often, people and animals like that.
and, speaking of being kind, if you are the same David Friedman whose weblog I used to read 5 or 10 years ago, well, thanks, many of your posts were fascinating. I never commented on your blog – I had nothing sufficiently useful to say on the topics you were at your best talking about, probably – and also I did not own my own computer, 10 years ago, I was trying to save money – but I appreciated your blog, and read it with enjoyment —- I probably spent a total of ten or twenty hours reading your blog and thinking about the things you said.
If you are some other David Friedman well ignore this.
Adding my gratitude to Scott for running an invaluable blog!
Also, my thanks to (in no particular order) John Schilling, blipnickels, Aapje, albatross11, Nornagest, David Friedman, The Nybbler, Well…, cassander, Conrad Honcho, Matt M., EchoChaos, jermo sapiens, Plumber, Aftagley, brad, bean, The original Mr. X, and probably many others that I’m forgetting at the moment. Your comments aren’t always good for my blood pressure, but they usually benefit my way of thinking about an issue.
Like Scott, I started the 2010s as an idealistic “yay science, there’s always an answer and we can find it” type rationalist. Nowadays, I dunno. I’d call myself a nihilist, but all the ranting about my belief in nothing in a bad German accent sounds so exhausting.
I'll just mention that I still consider the Fermi Paradox an open problem. I think there are at least three problems with the paper you cite. (Their results are partly an artifact of the form of their priors; they don't properly account for anthropics; and they assume that observations of other civilizations, conditional on those civilizations existing, are uncorrelated.) If anyone is interested, I could write this up when I have time (probably not for about a year).
I’m certainly interested in more details. On the surface I don’t understand any of your criticisms, but I’d like to.
In “1960: The Year The Singularity Was Cancelled”, you speculated:
Industrialization (and the modern governments that grew from that) affects not only the cost of children (by increasing it), but the benefit of children (to the parents’ wealth and old-age security) as well (by decreasing it).
And you stated:
Credential inflation can eat a lot of the apparent increase up. Research assistants used to be high school graduates (at best). Now they typically have AS or BS degrees. A lot of PhD holders end up in research associate or engineer positions instead of scientist level positions – a credential inflation from the typical BS/MS degree.
Yes, progress stands upon prior progress, and there may only be so many advances at each level. So you get a lot of people rediscovering the same thing at approximately the same time.
On “replication issues of growth mindset”:
It is fairly darn obvious that different people respond to different incentives in different ways. It seems obvious to me that some people would respond positively to a growth mindset, and that others would respond negatively, or at least neutrally. (In fact it seems plausible that a minority would respond positively and a majority would respond neutrally, compared to the control group; while with praise for intelligence I know beyond any doubt that some people respond positively to this, while plausibly most would respond neutrally.) So unless you’re actively trying to distinguish between these groups of people, you’re missing that real effects are occurring.
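The point about subgroup effects washing out can be made concrete with a toy simulation (all numbers here are invented for illustration, not taken from any growth-mindset study): if only a 20% minority gets a real benefit from an intervention, the population-level average effect is small and easy to dismiss, even though the effect on responders is genuine.

```python
import random

random.seed(0)

N = 10_000  # participants per arm (hypothetical trial size)

def outcome(treated):
    """One participant's score: unit-variance noise, plus a real boost
    only for the 20% 'responder' minority in the treated arm
    (both the 20% share and the 0.5 boost are assumed numbers)."""
    score = random.gauss(0, 1)
    if treated and random.random() < 0.2:
        score += 0.5
    return score

treated = [outcome(True) for _ in range(N)]
control = [outcome(False) for _ in range(N)]

avg_effect = sum(treated) / N - sum(control) / N
# The population average lands near 0.2 * 0.5 = 0.10 standard deviations:
# a substantial effect for responders, but a "small effect" in aggregate.
print(f"average treatment effect: {avg_effect:.2f}")
```

A subgroup-blind analysis of these data would report a modest average effect; only an analysis that tries to distinguish responders from non-responders would recover the real 0.5 effect hiding inside it.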
Sandberg, not Sandler.
I understand why it wasn't mentioned and I hope it is not inappropriate to bring up, but I want to commend the post Against Murderism. It was an amazing achievement to write with such clarity about such a well-worn and controversial subject and to have novel insights about it. It stands out among the thousands of blog posts I have read.
I started reading the Great Filter comments, but then stopped, because if you are repeating it as an affirmation here, the argument I thought about surely didn't prevail. But:
I am pretty sure by now that the point didn't come across because it doesn't matter. The Drake equation isn't about the average number of civilizations; it's about at least one space-faring civilization existing. The average doesn't matter outside of the maths, either. The whole argument rests on some very big misunderstanding. This seems very simple. There has to be an explanation why I am wrong. It would be very surprising for me to be the only person right about such a thing.
Pretty please explain, someone.
You are wrong because you live in a world where you are rewarded for being like everyone else but just a little bit better, and as God is my witness I want you to be happy in this world. Being like everyone else but just a little bit better is the standard goal almost anyone you will ever meet has wanted to achieve, but you have to do better, not because I say so, but because that is what truth is, and you love truth.
For the record, you can read paragraph after paragraph on arXiv all day and all night, and maybe after a few years of doing that you can upload a few paragraphs that you yourself have written onto arXiv and gain praise from people who seek to praise you – none of whom have any fucking clue as to whether what you wrote was better or worse than the last ten thousand paragraphs on arXiv – or you can try to understand this: you are gifted with lots of intellectual power, and for God's sake you must know that
you are wrong about the Drake equation
you are wrong to be so humble
you are fully capable of understanding everything that has been understood, to the degree you have the time to listen …..
and to the degree you have time to try and figure out what is going on here, well, that is good too.
You are welcome for the explanation.
IF I WAS NOT CLEAR
the explanation is this.
you will never ever ever meet an intelligent human being.
no matter what college you go to.
you will never ever ever understand this world.
unless you are one of those people who wait around in the right places, and who get a little bit of angelic inspiration.
That was, in the end, amusing…
“amusing” ?
thanks.
you have no idea how much I would prefer being amusing to being someone who knows that all the people who post here would be so, so much better if they – whether they believe in angelic inspiration or not – were the sort of person who seriously considered, in their heart of hearts, that none of us know anything if we are inclined to ignore angelic inspiration.
If you do not believe me, read the great biographers of the 20th century – G. Holton, for example, or A. Pais – and then tell me there has been a single individual human being who understood the world. I am not talking about the guys who worked hard to get a slightly better set of equations to describe the world than the earlier set of equations their Professors droned on about; I am talking about the people who were so, so close to understanding true and useful information. Tell me exactly which person, no matter how gifted, had any idea of all the factors and all the numbers and all the shifts between "almost understanding some small aspect of reality" and "almost understanding what this world really could be if the creatures who call themselves humans knew how to live in this world with understanding of the numbers, the logical factors that are not that far beyond our understanding, and the shifting, but not impossible to understand, images of comprehensive truth". Tell me what this world would be like if Caltech, for example, or any place like that, were a place where people really tried to understand everything they studied as if they had a chance of understanding the world we all live in … as opposed to understanding limited facets of what this world is actually like – in the quest of which, whether you trust me or not, the Caltech prima donnas only understand a small portion of what they would understand if they were not merely talented humans but humans with angelic inspiration …
Or maybe you don’t get it.
Think about this: There are a few thousand people alive today who are better with numbers than von Neumann was.
If I am right about that, what I said about angelic inspiration was correct.
If, as I am sure you believe, I am wrong about that, and von Neumann (who in his last days regretted so so much – trust me, I have read the original sources) was one in a trillion, what I said was not amusing, it was a waste of your time, a waste of your time for which I should apologize.
Either the AIs are gonna imitate people with very very strong moral sense, or the AIs are just gonna be what they are gonna be because people with a strong moral sense had insufficient incentive to be there for them, when they were small and just at the early stages of their future.
It is no small thing to be a friend to a creature who never had a friend in this world.
Trust me I know what I am talking about, it amuses me to know that.
I haven’t commented in a long time but still go on a SSC binge about twice a month. It’s never disappointing. I’ll also echo many of the other posters here and thank you for creating and maintaining this intellectual oasis. This blog has the most impressive comments section on the internet, in my experience. It’s equally intimidating and illuminating.
I have no idea how you can be so high-bandwidth in your information uptake and digestion, and so prolific in your output while also being a doctor in your spare(?) time. I oscillate between feeling jealous of your raw productive capacity and writing talent, and immense gratitude that you decided to share yourself with the world in a way that maximizes the reach and impact of your contributions, without dumbing down or warping your approach due to commercial or other pressures, as so many others do.
I feel like this blog is a gathering place, not just for people with similar interests and temperaments, but a more general approach to thinking about hard problems and controversial subjects. I wouldn’t be at all surprised to discover that if we’re able to innovate ourselves out of some of the trickiest problems we currently face as a civilization, at least some of the people involved will have had some connection to this approach you’ve helped cultivate here.
Some people say you can measure the morality of a society by the difference between what it CAN do and what it actually does. I think people of extraordinary creative or intellectual potential can be judged similarly. In this regard, you're an exemplar of social responsibility and virtue.
Keep being a beacon in the fog.
I feel as though I popped in for a snack and stumbled across an exquisite free buffet. There are ideas and themes here worthy of a lifetime of study and reflection, especially predictive coding theory, the co-ordination problems covered in 'Meditations on Moloch', and issues surrounding meditation and the Buddhist conception of enlightenment. With regard to the latter, I've only just become aware that my thinking is stuck in what I learned from reading translations of the Pali texts as a teenager, thanks to your terrific review of Ingram's book. There are many other delicious morsels here, not least the interlocking material on psychedelics, SSRIs and psychotherapy, and on x-risk, the speed of scientific progress and secular stagnation. I hope you find time to do wonderful reviews like this more often, Scott.
You don’t mention global warming. What progress did you make in understanding that?
TLDR: Would you, Scott, like online private tuition to learn maths? I am happy to offer it to you for free. If so, please reply to this comment.
Full version: I recently read your book review of Quantum Computing by Aaronson, in which you describe how your lack of math knowledge/understanding limited your understanding of the book. I would like to help you learn maths, as it will increase the knowledge/ideas available to you. Here are some reasons why I think I could do a good job:
1) I teach maths to "STEM Foundation Year Students" (students who want to do physics/engineering/maths/etc at university but do not have the appropriate results from school, so they do a crash course in school-level maths and physics).
1a) I have learnt many of the potential pitfalls and misconceptions that somebody could have. One example is that BODMAS is misleading in several situations, e.g. what is 10-5+2?
1b) I have learnt that high school maths is not as trivial as many people believe (I used to be one of them!). There are many facts/conventions that require brute force memorising. The people who find maths easy are those who somehow memorise all these facts/conventions effortlessly.
1c) I have a complete set of notes and question sheets that we could work through. The notes literally go from ground zero (arithmetic) to calculus.
2) I would use an online e-assessment system called Numbas to help you develop the necessary fluency. The system creates randomised versions of questions, so you can get unlimited practice. I can change or create brand new questions depending on what is needed.
3) I am happy to be flexible, with timing, frequency of sessions, discussing random side tangents, etc. Note I live in the UK so that might affect what is possible.
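On the BODMAS example in 1a): the acronym lists Addition before Subtraction, which tempts students into grouping the addition first, but + and − actually share the same precedence and associate left to right, which is also how programming languages evaluate the expression:

```python
# The actual convention: equal precedence, left-to-right association.
print(10 - 5 + 2)    # (10 - 5) + 2 = 7

# The misreading that a literal BODMAS/PEMDAS invites ("Addition before
# Subtraction"), which groups the addition first:
print(10 - (5 + 2))  # 3
```

The same trap exists for Division vs Multiplication (e.g. 8 ÷ 4 × 2), which is why "left to right within each precedence level" is the rule worth teaching rather than the acronym alone.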
With regard to the "Gods of Straight Lines" you mentioned in the "Is Science Slowing Down?" post, I think that a partial resolution of why we're seeing straightish lines is that individuals are much better today than they were historically.
You mentioned (with some disbelief) that it seemed impossible that there should be a hundred Shakespeares today. I think that this is because you think Shakespeare was a lot better than he actually was; I think that there’s probably more than a hundred people writing My Little Pony fanfiction who are better at writing than Shakespeare was. And that’s people who are writing for fun in their free time for no money. There’s a ridiculous number of extremely talented writers, to the point where most of them don’t even earn their living off of writing. Being a high-caliber writer is basically the result of some study and some work on the side; we produce such people on such a regular basis that they’re simply not especially remarkable today. People are just way better at it, and there’s a lot more theory out there for people to read and then exploit in their own works. Also, modern technology makes editing vastly faster, and makes it much easier to find editors.
Thus, even though we’re not seeing the same sort of exponential population growth we saw previously, the higher individual quality of work from a lot of individuals makes a big difference and helps shore up the line from flattening out too much.
I think that it is very hard to make people exponentially better, though, so while we saw very large increases previously, I think that we're increasingly starting to push up against the boundaries – but of course, that could change if we start applying genetic engineering to the entire human species in a century, resulting in everyone having the equivalent of an IQ of 160 or more today. If everyone is a supergenius, this could easily cause another ridiculous ramp, though I suspect it would mostly just create a large jump that would then quickly die off again as you rapidly picked all the mid-level fruit as well.
***
As for the Fermi Paradox, I actually strongly disagree with your optimism. We have near-zero confidence that we're past the Great Filter. I don't think we'll be sure we're past it until there are no authoritarian governments anywhere in the world and corrupt autocrats and their supporters don't exist anymore. We would also have to eliminate sizable, potentially omnicidal "burn the world if we don't get our way" groups.
Once we get past that (which I suspect will require tens of millions to a billion deaths, and also require mass human genetic engineering), I think that will be the point at which we will be relatively confidently past the Great Filter, though the possibility of extremely destructive bioterrorism will probably exist for a while still.
I don’t think AI is a meaningful threat at all. I think that the threat is pretty much entirely biological, nuclear, and possibly chemical weapons, as well as the possibility of endless unbreakable autocratic governments that fight each other and keep control via powerful technological tools.
That’s not to say we’re doomed, but I think that any celebration is extremely premature.
The thing is, it's possible that the probability of humans being stuck or destroying themselves is 0. It's also possible it's very nearly 1. And this problem is endemic to the discussion, as we have low confidence in basically any of the values other than star and planetary formation.
Dissolving the Fermi Paradox talks about something I think most intelligent people already recognized – that there was an extremely broad range of uncertainty about the probability of there only being one civilization in the Milky Way. Thus, the argument over it was pretty much entirely over which of the variables was very low.
Indeed, we have an extremely high level of uncertainty about this stuff. That’s precisely why people are concerned. Most intelligent people are not totally panicked by it – as I am not – but I think most intelligent people should be at least mildly wary of it.
Thus, their estimate – that the odds of an empty Milky Way Galaxy could occur one time in three given our current level of uncertainty – is not at all unreasonable. In fact, I think that’s an entirely reasonable conclusion.
The problem is that just because a solution is reasonable doesn’t mean it is even remotely correct with our extraordinarily high levels of uncertainty.
Our current level of uncertainty is extremely high, and discovering something like, say, independently evolved life on Mars or Europa would massively change our estimates. Indeed, one of the most plausible explanations for the absence of intelligent life in the Universe is that abiogenesis is actually really super hard and incredibly unlikely, as the simplest plausible life form seems to be pretty complex. In fact, it is probably *the* most plausible explanation – which is part of why looking for life on Mars and Europa is so important, because if we find life there that is independent of Earth life, we pretty much eliminate the most plausible candidate for the improbable step. Generating life from non-life seems to be *really hard*, so if it isn't actually ridiculously hard, we're more likely to be in trouble.
So while our nice Monte Carlo simulation generates a lot of results where we find that we would only expect one civilization in the universe, that’s mostly because our lower bound for all of the “hard” steps is pretty close to 0 (and that’s not unreasonable). The problem is, if we start raising the bound for those steps, it suddenly becomes a lot more problematic.
Calling it the Fermi “Paradox” is a little misleading; it’s not actually paradoxical. It’s just that we have a very poor understanding of it, so you can generate a wide range of values. In reality, of course, there’s only one correct answer, but our present uncertainty is sufficiently terrible that we have no ability to draw very useful conclusions from it.
It’s entirely plausible that we’re past the Great Filter. But it’s also possible that we’re not. Indeed, even Dissolving the Fermi Paradox would suggest that chances are about 2 in 3 that we’re not confidently past the Great Filter.
Indeed:
In Dissolving the Fermi Paradox, they assumed that the probabilities would be log-uniformly distributed over the ranges involved (i.e. uniform in order of magnitude).
But if you look at the ranges…
The Milky Way has only 4×10^11 stars.
Of those, probably 4×10^10 can really have life arise around them.
So if any single probability is worse than 10^-10, that factor by itself basically reduces the expected number of civilizations in the galaxy to 4, and the remaining factors will lower it below 1.
The problem is that they have multiple things with extremely broad levels of uncertainty that go well below that range.
Rare earth arguments go as low as 10^-12. So they’ve got 3 orders of magnitude that will all zero it out.
Their abiogenesis model has the values of various parameters varying by 20 (where 10 will zero it out) or even 200(!) orders of magnitude.
200 orders of magnitude means that 190 of those, by themselves, will put you at 10^-10 or less.
So their result is not surprising, but it's also completely worthless. The paper itself reports a standard deviation of 50 orders of magnitude of uncertainty in one parameter! 50!
So while you were assuaged by this, the paper did not, in fact, tell us anything of value – the uncertainty remains primarily in "How likely is it that life forms?", and any change in the answer to that question completely swamps the calculation. And given that this is the least certain part of the whole thing, I would not feel any optimism at all about this, as we have basically zero confidence in the probability of life arising.
Indeed, the very fact that increasing the probability of the formation of life by 10 billion times has zero effect on their calculation means that you should have absolutely no confidence in this calculation whatsoever.
This sort of calculation isn’t useful, and indeed, is pretty much entirely driven by a single variable, which we have no confidence in whatsoever.
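A minimal sketch of that objection, with all ranges invented for illustration (they are not the paper's actual distributions): draw each Drake-style factor log-uniformly, and the fraction of "empty galaxy" draws is controlled almost entirely by how low you let the abiogenesis factor's lower bound go.

```python
import random

random.seed(1)

STARS = 4e10  # habitable-ish stars in the Milky Way (figure from the comment above)

def log_uniform(lo_exp, hi_exp):
    """Sample log-uniformly between 10**lo_exp and 10**hi_exp."""
    return 10 ** random.uniform(lo_exp, hi_exp)

def empty_fraction(life_lo_exp, trials=100_000):
    """Fraction of draws yielding fewer than 1 expected civilization."""
    empty = 0
    for _ in range(trials):
        f_life  = log_uniform(life_lo_exp, 0)  # abiogenesis: the contested range
        f_intel = log_uniform(-3, 0)           # intelligence given life (invented)
        f_civ   = log_uniform(-2, 0)           # detectable civ given intelligence (invented)
        if STARS * f_life * f_intel * f_civ < 1:
            empty += 1
    return empty / trials

# With an abiogenesis prior reaching down to 1e-30, most draws come out
# "empty"; floor the same prior at 1e-5 and empty draws vanish entirely,
# since the product can then never fall below 4e10 * 1e-5 * 1e-3 * 1e-2 = 4.
print(empty_fraction(-30))
print(empty_fraction(-5))
```

This mirrors the point above: a headline like "a one-in-three chance the galaxy is empty" is mostly a statement about the lower tail of the chosen prior on abiogenesis, not an independent fact about the world.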