Highlights From The Comments On Kuhn

Thanks to everyone who commented on the review of The Structure Of Scientific Revolutions.

From David Chapman:

It’s important to remember that Kuhn wrote this seven decades ago. It was one of the most influential books of pop philosophy in the 1960s-70s, influencing the counterculture of the time, so it is very much “in the water supply.” Much of what’s right in it is now obvious; what’s wrong is salient. To make sense of the book, you have to understand the state of the philosophy of science before then (logical positivism had just conclusively failed), and since then (there has been a lot of progress since Kuhn, sorting out what he got right and wrong).

The issue of his relativism and attitude to objectivity has been endlessly rehashed. The discussion hasn’t been very productive; it turns out that what “objective” means is more subtle than you’d think, and it’s hard to sort out exactly what Kuhn thought. (And it hasn’t mattered what he thought, for a long time.)

Kuhn’s “Postscript” to the second edition of the book does address this. It’s not super clear, but it’s much clearer than the book itself, and if anyone wants to read the book, I would strongly recommend reading the Postscript as well. Given Scott’s excellent summary, in fact I would suggest *starting* with the Postscript.

The point that Kuhn keeps re-using a handful of atypical examples is an important one (which has been made by many historians and philosophers of science since). In fact, the whole “revolutionary paradigm shift” paradigm seems quite rare outside the examples he cites. And, overall, most sciences work quite differently from fundamental physics. The major advance in meta-science from about 1980 to 2000, imo, was realizing that molecular biology, e.g., works so differently from fundamental physics that trying to subsume both under one theory of science is infeasible.

I’m interested to hear him say more about that last sentence if he wants.

Kaj Sotala quotes Steven Horst quoting Thomas Kuhn on what he means by facts not existing independently of paradigms:

[Kuhn wrote that]:

A historian reading an out-of-date scientific text characteristically encounters passages that make no sense. That is an experience I have had repeatedly whether my subject is an Aristotle, a Newton, a Volta, a Bohr, or a Planck. It has been standard to ignore such passages or to dismiss them as products of error, ignorance, or superstition, and that response is occasionally appropriate. More often, however, sympathetic contemplation of the troublesome passages suggests a different diagnosis. The apparent textual anomalies are artifacts, products of misreading.

For lack of an alternative, the historian has been understanding words and phrases in the text as he or she would if they had occurred in contemporary discourse. Through much of the text that way of reading proceeds without difficulty; most terms in the historian’s vocabulary are still used as they were by the author of the text. But some sets of interrelated terms are not, and it is [the] failure to isolate those terms and to discover how they were used that has permitted the passages in question to seem anomalous. Apparent anomaly is thus ordinarily evidence of the need for local adjustment of the lexicon, and it often provides clues to the nature of that adjustment as well. An important clue to problems in reading Aristotle’s physics is provided by the discovery that the term translated ‘motion’ in his text refers not simply to change of position but to all changes characterized by two end points. Similar difficulties in reading Planck’s early papers begin to dissolve with the discovery that, for Planck before 1907, ‘the energy element hν’ referred, not to a physically indivisible atom of energy (later to be called ‘the energy quantum’) but to a mental subdivision of the energy continuum, any point on which could be physically occupied.

These examples all turn out to involve more than mere changes in the use of terms, thus illustrating what I had in mind years ago when speaking of the “incommensurability” of successive scientific theories. In its original mathematical use ‘incommensurability’ meant “no common measure,” for example of the hypotenuse and side of an isosceles right triangle. Applied to a pair of theories in the same historical line, the term meant that there was no common language into which both could be fully translated. (Kuhn 1989/2000, 9–10)

While scientific theories employ terms used more generally in ordinary language, and the same term may appear in multiple theories, key theoretical terminology is proprietary to the theory and cannot be understood apart from it. To learn a new theory, one must master the terminology as a whole: “Many of the referring terms of at least scientific languages cannot be acquired or defined one at a time but must instead be learned in clusters” (Kuhn 1983/2000, 211). And as the meanings of the terms and the connections between them differ from theory to theory, a statement from one theory may literally be nonsensical in the framework of another. The Newtonian notions of absolute space and of mass that is independent of velocity, for example, are nonsensical within the context of relativistic mechanics. The different theoretical vocabularies are also tied to different theoretical taxonomies of objects. Ptolemy’s theory classified the sun as a planet, defined as something that orbits the Earth, whereas Copernicus’s theory classified the sun as a star and planets as things that orbit stars, hence making the Earth a planet. Moreover, not only does the classificatory vocabulary of a theory come as an ensemble—with different elements in nonoverlapping contrast classes—but it is also interdefined with the laws of the theory. The tight constitutive interconnections within scientific theories between terms and other terms, and between terms and laws, have the important consequence that any change in terms or laws ramifies to constitute changes in meanings of terms and the law or laws involved with the theory (though, in significant contrast with Quinean holism, it need not ramify to constitute changes in meaning, belief, or inferential commitments outside the boundaries of the theory).

While Kuhn’s initial interest was in revolutionary changes in theories about what is in a broader sense a single phenomenon (e.g., changes in theories of gravitation, thermodynamics, or astronomy), he later came to realize that similar considerations could be applied to differences in uses of theoretical terms between contemporary subdisciplines in a science (1983/2000, 238). And while he continued to favor a linguistic analogy for talking about conceptual change and incommensurability, he moved from speaking about moving between theories as “translation” to a “bilingualism” that afforded multiple resources for understanding the world—a change that is particularly important when considering differences in terms as used in different subdisciplines.

Syrrim offers a really neat information theoretic account of predictive coding:

Suppose you have an alphabet composed of 27 letters (the familiar 26 plus a space). You are interested in encoding it in binary for transmission. Of course you want to use as few bits as possible. How might you go about doing this? The first suggestion would be to assign each letter a bit pattern of equal length. In this case, your transmission will take 4.76 bits per letter. You realize that in English some letters occur much more frequently than others, and to devote the same number of bits to each is wasteful. You find a table recording letter frequencies in common English texts, and reassign the bit patterns to give shorter values to more common letters. In this way, you reduce the number of bits needed to 4.03 per letter on average. Next you realize that some letters are followed by others even more commonly than they appear in normal text. Encoding the bit patterns based not only on the letter in question, but also the previous one, reduces your usage to 3.32 bits per letter.
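The first two numbers can be checked in a few lines of Python. (The frequency table below is illustrative, not Brillouin’s actual one, so it demonstrates the drop in entropy rather than reproducing his exact 4.03 figure.)

```python
import math

def entropy(probs):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Fixed-length coding: 27 equiprobable symbols need log2(27) bits each.
uniform_bits = math.log2(27)          # ~4.75, which Brillouin rounds to 4.76

# Variable-length coding: with unequal frequencies the entropy drops.
# Illustrative frequencies only -- not Brillouin's measured table.
common = {' ': 0.19, 'e': 0.10, 't': 0.07, 'a': 0.06, 'o': 0.06}
rest = (1.0 - sum(common.values())) / 22   # 22 remaining letters share the rest
probs = list(common.values()) + [rest] * 22

weighted_bits = entropy(probs)        # strictly below uniform_bits
print(f"{uniform_bits:.2f} vs {weighted_bits:.2f} bits per letter")
```

Conditioning on the previous letter skews the distribution further still, which is where the 3.32 figure comes from.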

Now we play a game. A person is asked to guess what the current letter is. We tell them if they got it right or wrong. The right answer advances the current letter. They might initially guess the letter ‘t’. If they are right, they might further guess ‘h’. Getting that wrong, they could try ‘a’, and so on. The answer to each question, being yes or no, encodes a single bit. We record how many questions they ask over some long text, and therefore find the number of bits per letter to be 1.93.

(This example derived from Science and Information Theory by Leon Brillouin)

In this latter game, we ask the participants to guess (predict) what a letter is, and therefore define an encoding (coding) for each letter. The method by which a person performs this prediction is twofold. First, they have some idea what the text is saying, and therefore what it will say next. Second, every time they receive a negative response, they realize the text is saying something slightly different than they guessed, and so change their prediction for future letters.

The use of bits highlights an important practical application of all this. When you see some text as I am writing here, you see 4.76 bits for every letter (more, because of capitalization and punctuation and what not). And yet you require only 1.93 bits in order to know what is being said. The extra 2.83 bits take the form of redundancy. If I made some spelling error, or you read what I said particularly quickly, you might miss one of the letters I intended to convey. Yet because you have so many extra unnecessary bits, you can recover what is lost. This is done similarly to how it was done in our game. As you read, you expect some letter to come next. When you encounter a slightly unexpected letter, you would update your expectation to account for it. When you encounter a completely unexpected letter, you might ignore it and continue as if your expectation was met.

To tie this into the card example: A playing card contains log2(52) = 5.7 bits of information. If you are flashed a playing card very quickly, you might only have enough time to get 5.7 bits of information out of it. In this case, you would be forced to assume it is a playing card. If you have more time to look at it, you might be able to extract more bits, but even then, you might so heavily expect a playing card that you ignore other possibilities.

Going back to the game: A person is allowed to ask which letter is next. But what makes the answer a single bit doesn’t depend on the nature of the question, only the binary nature of the answer. We could permit any yes-or-no question and still count bits by the number of questions. We then get into the interesting game of what question to ask. If someone had no clue what letter would follow, and wanted to determine it as quickly as possible, they might ask whether it falls in one half of the possible letters or the other. Or if they feel sufficiently confident in their guess, they might guess two or more letters at a time. (Brillouin points out that the value 1.93 for the number of bits per letter must be too high, because we force the player to ask for the letter even when it is obvious.)
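The two questioning strategies can be sketched side by side. (The frequency ordering of the alphabet below is a rough assumption, not a measured table.) Notably, guessing letters in frequency order alone barely beats blind halving; it is the human player’s use of context that brings the figure down toward 1.93.

```python
import math

# 27 symbols in rough frequency order (an assumed ordering, for illustration)
ALPHABET = " etaoinshrdlcumwfgypbvkjxqz"

def questions_per_letter(text):
    """The game as described: guess letters one at a time in frequency
    order; each yes/no answer costs one question, i.e. one bit."""
    total = sum(ALPHABET.index(ch) + 1 for ch in text)
    return total / len(text)

def halving_questions(n=27):
    """With no predictive model at all: binary search over the alphabet."""
    return math.ceil(math.log2(n))

sample = "the cat sat on the mat"
print(questions_per_letter(sample))   # a modest win over halving, no context used
print(halving_questions())            # 5
```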

Now the playing card. You ask: “is it red or no?”, (no), “is it spades or no?”, (no). The prevailing paradigm implies that you now have complete information on the suit. “then it must be clubs?” (no). Once you realize that these are fake playing cards, you ask about the color and the suit independently. One could do a treatment of paradigms in science in a similar way: “is it a particle?” (yes) “then it isn’t a wave?” (no). “wait what?”…
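The arithmetic behind the card version of the game is easy to spell out (a sketch of the point, not anything from Brillouin):

```python
import math

# A standard deck: 52 equally likely cards carry log2(52) bits in total.
card_bits = math.log2(52)             # ~5.7

# Under the standard-deck paradigm, color is determined by suit, so two
# yes/no questions ("red?", "spades?") pin down the suit: log2(4) bits.
suit_bits = math.log2(4)              # 2.0

# With the trick deck, color and suit vary independently, so the old
# two-question scheme no longer suffices: log2(2 * 4) bits are needed.
trick_suit_bits = math.log2(2 * 4)    # 3.0

print(f"{card_bits:.1f}, {suit_bits:.0f}, {trick_suit_bits:.0f}")
```

The extra question is exactly the cost of the paradigm’s broken assumption.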

Michael Watts writes:

I find [the quote about dormitive potency] very interesting, because the paradigm everyone mocks (according to this) is the same paradigm current in medicine today.

Years ago, I started to have a problem with the skin on my fingertips peeling off. This got to the point where I consulted a doctor, and he told me “we call this desquamation, which means “it’s peeling”. We don’t know why, and there’s nothing you can do about it.” Eventually, it cleared up by itself. We don’t know why.

There’s an old joke among doctors (at least I hope it’s a joke) that if you don’t know what a patient has, you just repeat their symptoms back to them in Greek or Latin:

“I get headaches at night and I don’t know why.”

“You have idiopathic nocturnal cephalgia.”

“Wow, you figured that out so quickly! Modern medicine really is amazing!”

JP corrects some of my terminology:

It would be better to distinguish more clearly between schools and paradigms. Copernican astronomy, Newtonian mechanics and Predictive Coding are all schools. Only the former two were paradigms; that is, largely unchallenged and generally accepted. In the non- or prescientific stage medicine, psychology, … are currently in, there are plenty of competing schools, and therefore no paradigm. What is required is an exemplar that sets the stage for a consolidation: a paradigmatic, i.e. paradigm-building, explanation for a phenomenon, after which everyone models their own explanations on it. For example (my example, not Kuhn’s), Darwin proposed a particular explanation for how the birds he found on the Galapagos islands got to have their beaks. Since then, a story about how something is in biology counts as an answer if and only if it has the same form as Darwin’s explanation.

Constructing such explanations following the form of the exemplar is the process of Normal Science, which a truly scientific discipline is mostly engaged in: solving puzzles. That sounds dismissive, but solving a puzzle might be as interesting as explaining how birds came about – not just on Galapagos, but in general – that is, they’re dinosaurs. Exciting!

I think the summary is also light on some of what Kuhn in particular was most interested in: in particular, incommensurability. Yes, Kuhn did indeed claim that we can make statements about the falsity of something only from within a certain paradigm (or school). Now Kuhn has plenty of inventory for talking about how a particular school might be thoroughly useless (i.e., it can be inconsistent and utterly fruitless), but “empirically false from an objective, out-of-paradigm point of view” is not amongst them. In fact, it is inherent especially to a science following the highest standards that it is deeply embedded in one particular worldview, or one might say, ideology.

From John Nerst:

Kuhn gets overinterpreted a lot by people who like to push various species of relativism. As I see it, such overinterpretation results from taking conclusions that only apply cleanly in the limit case and generalizing them to the whole domain. In this view the parts of a paradigm are all precisely dependent on each other for meaning to such an extent that if a paradigm is only somewhat different from another it is completely different and therefore not comparable at all, and the distance between them is not meaningfully traversable. Paradigms are internally integrated and coherent, and insulated from each other. You have to pick one because it’s impossible to mix them, and outside of a particular paradigm a concept means nothing at all. In or out.

Real science isn’t like this, and therefore conclusions that follow from this don’t necessarily apply. Kuhn uses examples that suggest it, but as many have said since, he kind of cherry-picks, and generalizing the pattern to draw far-reaching and radical conclusions about science as a whole is, well, an overinterpretation.

In real life concepts are both a bit vague and meaningfully more-or-less different (instead of just “the same” or “different”, full stop) in a way that makes it possible and in fact common to compare paradigms and pieces of paradigms (pieces that can be moved around without losing all of their meaning). This is because what we have are typically paradigm-like structures that overlap partially and are at least somewhat reconcilable. This is pretty true in the physical sciences and very true in the social sciences.

The ideas in TSOSR are valuable not because they describe science perfectly but because they work as a corrective to the prevailing view at the time. It’s one pole, and adding it to what we already had creates a new space (a spectrum where there used to be a point) which is great, but it’s important to remember that the new pole isn’t the whole space. To understand science you need both that side of the story and the fact-gathering/positivist/naive inductivist/whatever one. Generalizing only that facet gets you to the wrong place just as much as generalizing only the logical positivist side (or the falsificationist one if you want to get all multidimensional) does.

Virgil Kurkjian gives some examples of Kuhn explaining how words have different meanings across paradigms:

Revolutionary changes are different and far more problematic. They involve discoveries that cannot be accommodated within the concepts in use before they were made. In order to make or to assimilate such a discovery one must alter the way one thinks about and describes some range of natural phenomena. The discovery (in cases like these “invention” may be a better word) of Newton’s second law of motion is of this sort. The concepts of force and mass deployed in that law differed from those in use before the law was introduced, and the law itself was essential to their definition. A second, fuller, but more simplistic example is provided by the transition from Ptolemaic to Copernican astronomy. Before it occurred, the sun and moon were planets, the earth was not. After it, the earth was a planet, like Mars and Jupiter; the sun was a star; and the moon was a new sort of body, a satellite. Changes of that sort were not simply corrections of individual mistakes embedded in the Ptolemaic system. Like the transition to Newton’s laws of motion, they involved not only changes in laws of nature but also changes in the criteria by which some terms in those laws attached to nature […]

One brief illustration of specialization’s effect may give this whole series of points additional force. An investigator who hoped to learn something about what scientists took the atomic theory to be asked a distinguished physicist and an eminent chemist whether a single atom of helium was or was not a molecule. Both answered without hesitation, but their answers were not the same. For the chemist the atom of helium was a molecule because it behaved like one with respect to the kinetic theory of gases. For the physicist, on the other hand, the helium atom was not a molecule because it displayed no molecular spectrum. Presumably both men were talking of the same particle, but they were viewing it through their own research training and practice. Their experience in problem-solving told them what a molecule must be. Undoubtedly their experiences had had much in common, but they did not, in this case, tell the two specialists the same thing. As we proceed we shall discover how consequential paradigm differences of this sort can occasionally be.

John Schilling notes that I left out part of the story in my explanation of Copernicanism and stellar parallax. The problem wasn’t just that the medievals assumed the stars were close. It was that they appeared to be discs rather than points, which ought to imply close proximity.

[The] absence of parallax isn’t a “glaring flaw” in Copernican theory; it’s only the combination of immeasurably small parallax and large apparent diameter of the fixed stars that is a glaring flaw. A finite diameter implies a finite distance, particularly with the reasonable assumption that stars are the same class of object as the Sun, and the stellar diameters measured by 16th and 17th-century observers corresponded to distances incompatible with the parallax measurements of those observers.

This discrepancy could be resolved by better parallax measurements, or by better measurements of stellar diameter. And in fact, it was in 1720 that Halley used stellar occultation to show that the observed disks were optical anomalies and stellar angular diameter was immeasurably small – thus stars were immeasurably distant and could have immeasurably small parallax.

As you note, it was not long after this (but see also James Bradley and aberration) that the Tychonic model was finally done away with and the Heliocentric model became dominant.

Frog-Like Sensations writes:

It’s natural to find Kuhn’s metaphysics unclear since he was completely unclear about his metaphysics in Structure, and he spent much of the remainder of his career attempting to get clearer on it. Here’s one of the last things he wrote about this:

By now it may be clear that the position I’m developing is a sort of post-Darwinian Kantianism…Underlying all these processes of differentiation and change, there must, of course, be something permanent, fixed, and stable. But, like Kant’s Ding an sich, it is ineffable, undescribable, undiscussible. Located outside of space and time, this Kantian source of stability is the whole from which have been fabricated both creatures and their niches, both the “internal” and the “external” worlds. Experience and description are possible only with the described and describer separated, and the lexical structure which marks that separation can do so in different ways, each resulting in a different, though never wholly different, form of life. Some ways are better suited to some purposes, some to others. But none is to be accepted as true or rejected as false; none gives privileged access to a real, as against an invented, world. The ways of being-in-the-world which a lexicon provides are not candidates for true/false. (“The Road Since Structure”, 12)

Now, you may wonder how you can possibly make something clearer by saying it is a form of Kantianism, and as a non-Kant-scholar, I understand the feeling. But here’s my best stab at what’s going on here.

The most distinctive feature of Kant’s metaphysics is that he claims that a large number of things that are ordinarily claimed to be features of mind-independent reality — that is, of the world as it is in itself as opposed to how it is as represented by minds — are actually features of how our minds must represent the world. This includes both the obvious things, like color, and some really surprising things, like causality and the nature of space and time. So things in themselves do not enter into causal relations or exist in space and time, but they still exist and ultimately ground the nature of the world as it appears to us.

Kant’s view is not relativistic because (1) he thinks that the particular facts that are part of the world of appearance are (non-causally) determined by the nature of mind independent things (the “Ding an sich” mentioned above), and (2) he thinks that all minds impose the same kind of structure on the world (e.g., causal and with space and time).

Kuhn’s proposal is to reject the second claim. Instead of minds all imposing the same type of structure on the world, Kuhn suggests that changing paradigms can impose their respective structures on the world. There is still a mind-independent reality that in some way determines how things appear to us and also constrains how successful a given paradigm can be. But all the things that differ between paradigms concern only the features of our representation of reality. Mind-independent reality does not contain any of the relevant properties and so does not settle things one way or another, except insofar as it somehow renders one paradigm more useful than another at solving particular puzzles.

Anyway, I don’t find this view particularly appealing, but it’s the most coherent thing I’ve managed to get out of Kuhn.

I have to admit I have some of the same confusions about Kant as I do about Kuhn. I understand Kant as saying that because we see the world through the mediating influence of our mind, we can never know anything about true reality.

I agree that we see the world through mediating influences, but I’m not sure how far he wants to go with the “never know anything about true reality” piece. For example, I believe I have a car. Can I say with some confidence that true reality contains an object corresponding to my car? That it really and truly has four wheels? That its gas tank is half full? That its interaction with my sense organs explains why I so consistently get such nicely-structured car-related sense-data?

Sure, you can say something boring like “wheels are a social construct, really there are just rubber molecules in a cylindrical pattern”, or even “rubber molecules and shapes are both social constructs, in reality there’s only blobs of quantum amplitude on a holographic boundary entity”, or even “in reality there’s something as far beyond quantum amplitude blobs as quantum amplitude blobs are beyond wheels”. But you can say this kind of thing without Kant, and we just shrug it off as “Yeah, on one level that’s true, but I’m right about the wheels too.” Does Kant have anything to add to this?

One nice thing about the subreddit’s karma system is that it makes it easier for me to figure out who to highlight here. The top-voted comment was by ArgumentumAdLapidem:

This book is near and dear to my heart. As a young ArgumentumAdLapidem, an undergraduate physics major, I was really feeling my oats, and taking some upper-level history classes, just to prove I could do it. For some reason, some poor post-doc was assigned to do recitations, and got me, and I was STEMlording, as young STEMlords are wont to do. He gave me Kuhn to read. I read it, then bought it, then read it again. I had the same conclusion as SSC’s initial premise: this book is a fairly trivial description of the history of science. Lots of dirty laundry, to be sure, but nothing earth-shattering. He, of course, disagreed, and thought the book decisively proved that science was dethroned as the one-true-pursuit-of-Truth. Sadly, this story ends here; there was never a meeting of the minds. Reality intervened, there were finals to study for, and a wildly-overambitious lab project to complete.

But I still have that book. Actually, I have two copies, as someone else, unbidden, gave me a copy as well. Apparently history-of-science grads and philosophy-of-science grads hand them out to physics grads like garlic to vampires. (I readily admit, this might be a commentary on my former and/or current arrogance.) Over the years, I’ve thought about how I would have had that conversation differently. Here’s the current iteration:

To build a skyscraper, we need a foundation. The ultimate weight, volume, and height of the skyscraper is limited by the strength and soundness of the foundation. Science operates in a similar manner … the scope, accuracy, and detail of the scientific project is ultimately limited by the fundamental soundness of the model. The overall history of science, then, is the successive abandonment of one skyscraper for a bigger and better one, one with a stronger foundation, which allows the tower to reach greater heights.

But the devil is in the details, and Kuhn lays them out.

— There are people who have corner offices in the old skyscraper who don’t want to leave. They like their social status in this building, and they discourage (or punish) people who leave the building. They belittle people trying to build a new one.

— It’s not obvious, when the new foundation is being laid, that it will be any better or stronger than the existing one. You have to build the skyscraper (run the experiments) to find out.

— There are a lot of abandoned foundations laying around. They developed cracks, were built on unsuitable ground, or were otherwise deficient in some way that wasn’t discovered until they actually tried to build something on top of it. Most new scientific models fail. There are fads – some hot new model will attract a lot of attention, but begins to fade when it doesn’t show results. The scions of the current building can point to all the failure around them and confidently predict this new attempt will fail as well.

— As the skyscraper is being built, it’s not a smooth process. There will be mistakes and partial rebuilds. Most of the time, the new building will be a piecemeal framework of exposed structural beams, and will spend most of its time being shorter and less comfortable than the old building. The corner offices of the old building will look out their windows, see a tangle of metal and sweat in the construction site below them, and chuckle at their naive enthusiasm.

— The old building does still grow. There are remodels, things get slicker, more polished, expansions are added, maybe another floor is added. But the foundation can still only take so much, and can only be reinforced to a certain extent. Epicycles.

— The new building has new problems the old building didn’t have. The fire suppression system needs more powerful pumps to push water to ever higher floors. The doorman who just knew everybody has been replaced by a keycard authentication system that is confusing and annoying. These look like flaws to people in the old building, rather than the necessary scaffolding for a bigger, better building. The flat-earther model “Earth must be flat because look how far I can see”, which is simple, must be replaced with the more powerful “Earth is round, and, in a vacuum, you wouldn’t be able to see that far, but we must account for atmospheric refraction, here’s some corrections.” Annoying. But it isn’t just replacing one problem for another. The old problem was a fundamentally-limiting contradiction in the basic model that couldn’t be solved without scrapping the model. The new problem might be solvable. You won’t know until you try. You have to build the building to know.

— There’s a perception problem. The old building holds the height (truth) record basically until the new building reaches the height of the old one. Then the record goes to the new building, and the perception shifts – if you want to be in the game, you got to be in the new building. Some observer, watching the endless parade of people suddenly moving their boxes to the new building, concludes this is all just fad-chasing, like socialites flocking to the hottest club. They’re just doing whatever is popular with the other scientists.

So yes, all this is true. But, after all those failed attempts, all that drama, all that sneering and popularity-games, the skyscrapers still do get taller. As SSC notes, Kuhn barely admits this, in a whisper, on the last page. It is no wonder then, that this book has been used to represent claims far beyond what Kuhn actually claims.

And MoreDonuts on Kuhn vs. Popper:

The other simplistic view [Kuhn] was arguing against was Popper’s notion of falsification. In fact, falsification was the legal precedent for the definition of science at the time, in spite of the fact that philosophers of science never considered it very seriously.

Kuhn’s view also answers the question of why falsification has always been popular among scientists on the ground. When a field is performing “normal science” under a particular paradigm, the acceptance of particular facts or pieces of theory largely does resemble falsification: either the new proposal fits the evidence under the paradigm, or it does not. Kuhn (and Feyerabend) show how this simplistic model falls apart when comparing between paradigms, because there is no way to agree upon what constitutes falsification.

Philosophy of science is controversial because the core conclusion is largely unavoidable: “science” is simply a set of human institutions. There is no hard philosophical grounding for scientific truth. This was an unpopular conclusion historically because Christians were still trying to push Creationism, and progressives needed some argument for why scientific institutions were right and Christian institutions were wrong (the real answer, unironically: our people are smarter and less biased).

A couple of people commented that Kuhn was overstating things because Einstein just expanded upon Newton – a friendly amendment, if you will. Kingshorsey explains (using similar arguments to Kuhn himself) why this isn’t quite right:

I think there are two important lessons to take away from Kuhn: 1) the gap between our ability to model phenomena and our ability to explain those phenomena can be uncomfortably large; and 2) the perceived amount of empirical advantage provided by a new paradigm is not necessarily commensurate with the amount of conceptual adjustment its adoption will require.

A user on the SSC site said that the move from Newtonian to Einsteinian physics is more of a paradigm shuffle than a paradigm shift, because Newtonian equations still work perfectly well for all kinds of calculations. To reframe this in terms of point 2: the commenter thinks that because Einstein’s calculations empirically differ from Newton’s in only certain restricted cases, Einstein’s paradigmatic/theoretical challenge to Newton must be similarly small.

But that’s taking an unreasonably narrow view of what constitutes Newtonianism and Einsteinianism. Neither Newton nor Einstein produced equations in a conceptual vacuum. Rather, both embedded them within a cosmology that rendered them intelligible.

To Newton, space was absolute and yet non-substantive, just the distance between objects. Time was uniform and absolute. Gravity operated instantaneously apart from mediation. Newton believed that these cosmological assertions were necessary for his physics, and that in turn his physics supported these cosmological assertions.

When Einstein comes along, he overturns everything Newton thought about the nature of the universe. Space and time are no longer to be regarded as merely formal properties “within” which things move. Time is relative, space and time are intertwined, and space-time is the very “thing” of which gravity consists.

If we accept both that Einstein’s cosmology is better and that Newton’s math is still pretty good (rather than junk science), we are left with an uncomfortable conclusion. Newton’s degree of success at modeling phenomena in motion did not correlate strongly with his degree of success at explaining the structures or characteristics of reality responsible for those phenomena.

This in turn should lead us to question how much the success of Einstein’s math really supports the cosmology that is bound up with it. After all, what’s to stop a future physicist from saying, “Thanks for these equations, Einstein, I’ll use them where I can, but it’s a shame your model of reality was all wrong”?

And that’s why Kuhn is interesting, and comforting, and frightening. The conservation of certain observations through paradigm shifts forces us to reckon with the possibility that our own scientific successes may one day find a home in a model of reality entirely other than what we imagine now.

Jadagul has a whole blog post worth of comment.

And SpinyStellate doesn’t have much to say about the book, but recommends to us their project SciDash, “rigorous, reproducible, extensible, data-driven model validation [and visualization] for science”. I haven’t looked at it enough to entirely get what’s going on, but at least check it out for its cool visualization of geocentrism vs. heliocentrism (complete with p-values)!


80 Responses to Highlights From The Comments On Kuhn

  1. Michael Watts says:

    I guess I’ll comment further on the subject of my remark about the dormitive potency paradigm in medicine.

    I don’t see why it should be considered a joke that, if you don’t know what’s wrong with the patient, you repeat their symptoms back in a way they won’t understand. For one thing, the empirical evidence appears to be that this is really what doctors do. For another, I don’t see what the alternative to repeating the symptoms is. We’ve already stipulated that you don’t know anything other than the symptoms. Doctors are viewed as authority figures, and an important part of maintaining authority is appearing to know what you’re doing.

    But there’s another issue in play: a lot of people don’t seem to want anything more from their doctor than attention.

    In China, there is a belief that getting a shot is inherently healthful. If you have a cold (yes, a cold), you might stop by the hospital for an injection of saline solution. I find this practice horrifying and I have a hard time understanding why the doctors go along with it. The reaction of Chinese people I’ve discussed it with varies. I first learned about this from a high school student who didn’t like being taken to get a shot for minor illnesses. He was pretty receptive to my Western message that the whole thing is a sham.

    But I was surprised later to see a girl complaining on social media that she’d gotten a cold and gone for the customary treatment. Since she was in America, they told her “you don’t need a shot”. And instead of being happy about this, she was upset that, in her words, they didn’t seem to care that she was sick.

    If you have a patient of this type, then repeating their symptoms back to them in medicalese is what they came to you for.

    • Scott Alexander says:

      I’m less pessimistic than you.

      If I remember your original comment, you were talking about lupus, and your complaint was that since nobody has a 100% perfect understanding of what causes lupus, and it’s defined as a collection of symptoms, if I say all those symptoms and my doctor says “lupus” they’re just repeating it back to me.

      First of all, this isn’t quite true – if it were, then sites like WebMD would work perfectly, instead of always convincing people they have random cancers. Separating out the noise of “the symptoms we should be worrying about now” from “everyone has a few random things wrong with them at any given time” is hard, which is why websites (and doctors) so often fail to make lupus diagnoses correctly (or will diagnose people with lupus even when they don’t have it).

      Second of all, a real lupus diagnosis will take into account not only symptoms but a whole host of things like blood test results, demographics, and how they respond to medications. This isn’t in the official definition, but it will get used.

      Third of all, “lupus” is a category on which we hang other pieces of knowledge. Once we know you have lupus, we have some idea whether you should ignore it vs. start writing a will, what lifestyle factors might improve or worsen it, what medications might treat it, what sort of later complications you have to watch out for, whether your kids are likely to inherit it, et cetera. This is vastly better than just knowing “I have such and such a symptom”.

      There is a world of difference between having a cough+runny nose, and having a cold (even if this is defined as “having a cough plus runny nose”.) For one thing, having a cold means your symptoms aren’t being caused by throat cancer. For another, it means they’ll go away after a week or two.

      In fact, isn’t this the way everything works? I see a weird yellow flying bug in my house and I’m clueless. An entomologist sees the same flying yellow bug and recognizes it as a chrysohymenoptera. Sure, that just means “flying yellow bug” in Greek, but if she knows the species she’ll probably know other things like whether it’s poisonous, whether it’s rare or common, whether it’s an invasive species, whether it’s disease-carrying, what it eats, what eats it, and all sorts of other interesting things that might be useful. And if I ask the entomologist “What is this?” and she responds “chrysohymenoptera”, I’ve learned something useful and she’s done me a valuable service.

      • Michael Watts says:

        Third of all, “lupus” is a category on which we hang other pieces of knowledge. Once we know you have lupus, we have some idea whether you should ignore it vs. start writing a will, what lifestyle factors might improve or worsen it, what medications might treat it, what sort of later complications you have to watch out for, whether your kids are likely to inherit it, et cetera. This is vastly better than just knowing “I have such and such a symptom”.

        This is a good and valuable point, about which more later. But first,

        In fact, isn’t this the way everything works? I see a weird yellow flying bug in my house and I’m clueless. An entomologist sees the same flying yellow bug and recognizes it as a chrysohymenoptera. Sure, that just means “flying yellow bug” in Greek, but if she knows the species she’ll probably know other things like whether it’s poisonous, whether it’s rare or common, whether it’s an invasive species, whether it’s disease-carrying, what it eats, what eats it, and all sorts of other interesting things that might be useful. And if I ask the entomologist “What is this?” and she responds “chrysohymenoptera”, I’ve learned something useful and she’s done me a valuable service.

        You haven’t learned anything until you ask one of those followup questions, like “is it poisonous?” (If the bug is dangerous, you could reasonably expect a socially normal entomologist to volunteer that information unprompted; I would not expect the same for whether it was an invasive species.)

        In my mind, the paradigm being made fun of is that you answer a question by restating the question, an interaction with zero informational content. Stated this way, identifying an example relies on identifying what question is being asked. If I go to the doctor and say “I get headaches at night for no obvious reason, what’s wrong?” and the answer is “you have idiopathic nocturnal cephalgia”, that doesn’t help at all. All of that information was present in the question.

        If I go to the doctor and say “I get headaches at night for no obvious reason. Am I going to die?”, and the answer is “almost nobody dies after complaining of chronic headaches”, that answer is helpful. But that’s because I asked a different question, one to which the answer was known. If I follow up by asking “okay, I’m not going to die, but what I really want is to stop experiencing this excruciating pain”, and the answer is “you have idiopathic nocturnal cephalgia”, I’m going to lower my opinion of doctors because they couldn’t help me with my problem.

        If I go to the doctor and say “I get headaches at night for no obvious reason”, with an unstated “and what I really want is for someone to recognize that I’m suffering”, then the answer “you have idiopathic nocturnal cephalgia” perfectly addresses my needs.

        So no, I don’t think this is the way everything works. I think this is the way everything works when you ask a question of someone who doesn’t know the answer (and that medicine is prone to this because there are so many questions that (1) people care about and (2) nobody knows the answer to). If I start building a big pyramid in ancient Egypt, it may stand tall and proud or it may start to collapse in on itself. In the second case, I go to an engineer and he tells me “your pyramid is experiencing subsidence”. This is the dormitive potency paradigm if he stops there, but it’s useful if he explains that subsidence is caused by building on a layer of sand that is too deep, and I need to relocate my efforts to an area with shallower sand over bedrock.

        If answering questions by giving the situation a special name and calling it a day were the way everything worked, we wouldn’t be making fun of “trees grow because that’s what trees do” in the first place.

    • Aapje says:

      @Michael Watts

      I think that a major reason why people want a treatment, even if nonsensical, is the same reason why they engage in superstitious behavior: they want control over what happens to them. So they knock on wood to avoid tempting fate and want a shot or pill so they feel that they are in control of their healing process.

      It’s probably a side effect of an immensely beneficial human trait: our desire to make desired outcomes happen through our own actions.

      • Peter says:

        It’s probably a side effect of an immensely beneficial human trait: our desire to make desired outcomes happen through our own actions.

        There’s an old set of wargaming rules, the army lists include notes for each army. One has the line “The best Sarmatian tactics is a massed impetuous charge as soon as the enemy come within reach. The troops will probably do this anyway, so you might as well order it and accept the credit for its success.”

        …which on reflection is a little stronger than merely “feeling in control” – it’s about being able to present yourself as being in control without looking or feeling too much like a liar.

    • zarc says:

      Receiving a saline shot for a cold is a lot less horrifying than being prescribed antibiotics for one.

    • acerti says:

      I think this is a misunderstanding of what the “dormitive potency” paradigm is. It’s a philosophical question about explaining how things work at all, not directly a medical practice question. But imagine you had some sort of wolf-child in your medical practice, suffering from insomnia, who had never encountered the concept of pills or germs. After your difficult explanation of who you are, how do you explain what the pill is? Does “this has the power to make you go to sleep” explain anything about the pill at all, or do you need to go to agonistic ligands and opioid receptors? (ignoring the issue of whether opium is an appropriate insomnia treatment)

      To say “Opium causes sleep because it causes sleep” would be a tautology, but the statement in question says more than that. It says that opium has a power to cause sleep; that is to say, it tells us that the fact that sleep tends to follow the taking of opium is not an accidental feature of this or that sample of opium, but belongs to the nature of opium as such. That this is not a tautology is evidenced by the fact that early modern thinkers tended to regard it as false, rather than (as they should have done were it really a tautology) trivially true. They didn't say: "Yes, opium has the power to cause sleep, but that's too obvious to be worth mentioning"; they said: "No, opium has no such power, because 'powers,' 'final causes,' and the like don't exist".


      This is from Edward Feser, Aquinas. I don’t think the above quote alone really explains what he’s getting at, but I thought of it because of the dormitive potency thing:

      
Part of the reason the Aristotelian regards efficient causality as unintelligible without final causality is that without the notion of an end or goal towards which an efficient cause naturally points, there is no way to make sense of why certain causal chains are significant in a way others are not…the formation of magma might cause some local birds to migrate…but causing birds to migrate are not part of the rock cycle… some causal chains are relevant to the cycles and some are not… Actual experimental practice indicates that what physicists are really looking for are the inherent powers a thing will naturally manifest when interfering conditions are removed, and the fact that a few experiments, or even a single controlled experiment, are taken to establish the results in question indicates that these powers are taken to reflect a nature that is universal to things of that type. 


  2. Frog-like Sensations says:

    To reiterate, I’m not a Kant scholar, so what follows is just what I’ve picked up from being around Kant scholars and from the course I least internalized while a philosophy graduate student.

    I agree that we see the world through mediating influences, but I’m not sure how far he wants to go with the “never know anything about true reality” piece. For example, I believe I have a car. Can I say with some confidence that true reality contains an object corresponding to my car? That it really and truly has four wheels? That its gas tank is half full? That its interaction with my sense organs explains why I so consistently get such nicely-structured car-related sense-data?

    Sure, you can say something boring like “wheels are a social construct, really there are just rubber molecules in a cylindrical pattern”, or even “rubber molecules and shapes are both social constructs, in reality there’s only blobs of quantum amplitude on a holographic boundary entity”, or even “in reality there’s something as far beyond quantum amplitude blobs as quantum amplitude blobs are beyond wheels”. But you can say this kind of thing without Kant, and we just shrug it off as “Yeah, on one level that’s true, but I’m right about the wheels too.” Does Kant have anything to add to this?

    I’ll assume you mean the same thing by “true reality” as Kant does by “things-in-themselves”. Like most things related to this, Kant thinks we cannot know whether there is a particular thing-in-itself corresponding to each object of appearance, like your car. But even if there is some thing-in-itself that corresponds to your car, we can say that it must be very much unlike the car as it is experienced.

    Your car does have four wheels for Kant, but if “truly” having four wheels requires not just having four wheels but mind-independently having them, then he will deny this. More generally, although both Kantian things-in-themselves and the fundamental objects posited by physics have been taken to ground the nature of the observable world, you can easily be led astray by thinking of the former too much along the lines of the latter. Fundamental physical objects exist in space and time and enter into causal relations. Things-in-themselves do not.

    And Kant doesn’t just think we cannot know whether things-in-themselves do this. He thinks we know that they do not. There is a great deal of debate in Kant scholarship about how this works, dating back to Kant’s lifetime. Why can’t space and time not only be structures that our minds impose on experience but also (perhaps by coincidence) be part of the structure of the mind-independent world? This possibility is called the “Neglected Alternative” in Kant scholarship. But whether or not Kant was right to rule it out, he does, and so we can say that according to Kant there is no mind-independent object that has four wheels in the same way your car does.

    So that might count as something Kant “adds” to the otherwise common idea that observable things are grounded in a bunch of weird stuff posited by some science.

    Tying this back to Kuhn, it seems like his proposal is similar in this respect. He denies that the fundamental structures found in scientific paradigms are “candidates for true/false”. Presumably he can only do so if, like Kant, he thinks we can know that the mind-independent world lacks the kind of properties described by scientific theories, rather than merely thinking we cannot know one way or the other. If the mind-independent world could have such properties, then presumably the paradigm that described the world as having the properties it actually does have would be true.

    • Scott Alexander says:

      I’m interested if you can refer me to any (less than book-length) treatments of why Kant thinks this. If we’re asking “why does every map of the territory so consistently have this feature?”, then “because the territory contains something corresponding to that feature” seems like it should at least be a strong possibility.

      • Frog-like Sensations says:

        This is difficult because, in anything shorter than a book, Kant scholars tend to presuppose a huge amount of impenetrable Kant jargon.

        That said, here’s one recent article I just found defending Kant’s denial of the Neglected Alternative.

        I think section 3 of this SEP article does a better job of explaining the distinction between interpretations of Kant’s metaphysics that plays a central role in the above, and it’s fairly short, so I’d recommend that section as a supplement to the first link.

        Finally, here’s a recent dissertation on the issue. Obviously, this is much too long. But mostly just the first and (to a lesser extent) last chapters are relevant, and unlike the other links it starts from the bottom up in explaining Kant jargon.

      • jadagul says:

        I can’t speak for Kant here at all (among other things, I’m substantially less familiar with his work than with Kuhn’s, and I’m hardly a Kuhn expert). But I can give two potential answers here.

        First is a sort of linguistic answer. “Truth” is a predicate for _sentences_, not for facts. “I own a car” is true, but my car isn’t true, because what would that even mean? The world _is_, but the world isn’t true or false.
        But sentences all happen in some language, and language is only coherent within some shared social context, so you can’t evaluate truth outside of any context. It’s just a category error.

        Second is a more substantive answer. The fact that all maps have some common feature might tell you something about the territory, but it might also tell you something about the mapmakers. If every map you’ve seen shows the earth as being flat, that doesn’t mean the earth is flat. (And if every map you’ve seen is two-dimensional, that doesn’t mean the world is two-dimensional).

        As a better example, until _very recently_ none of our maps included weird quantum bullshit. That doesn’t mean that no weird quantum bullshit exists; it means that we have a tendency to not interpret things as being instances of weird quantum bullshit, because we’re just not wired that way. And maybe there’s something weirder that we’ll never be able to reason about coherently. (The whole “acausal physics” thing I talk about elsewhere is a good candidate).

        But the problem is, you can never tell if your maps all agree with each other because of something in the territory, or for some other reason, because all you have is the maps. You can assume the maps are reflecting something real, and I would guess they are, but you can’t _know_ that.

        (Compare: the idea that 1 isn’t a probability because there’s always some chance that something in the chain of evidence is wrong).

        Now, I don’t recommend worrying about this too much for the most part. Our maps seem to basically work, and our maps have sufficient consensus about enough things that we can mostly communicate. (Although this breaks down sometimes! One reason it’s so hard to argue with conspiracy theorists is that they have maps that _don’t show the same things yours do_, on a fairly deep level. They don’t perceive the same facts).

        But if you’re asking what we can really know with complete certainty, the answer is nothing. And if you’re asking how much we can perceive directly, unmediated by our filters and biases and maps, the answer is also nothing.

        • Sebastian_H says:

          More likely it is because weird quantum bullshit doesn’t manifest on the level of most actual maps that we care about. If you’re talking about how to drive from here to a national forest, electron tunneling on the road isn’t going to affect you at all.

          Which is exactly what I think about quite a bit of philosophical hand wringing about relativism and “certainty” about truth. Academia loves to get sucked into edge cases. They are interesting! But they aren’t the main cases either. Sometimes you get to be Einstein, and the edge case reveals something important about the main case. But usually you aren’t, and when you try to elevate the edge case to the main case you end up only confusing yourself.

          Yes some things have socially constructed parts. No it often doesn’t matter as much as people think, because the real parts shine through anyway.

          • Loris says:

            Ah, but the forest doesn’t exist unless you go there to observe it.
            Says Kant, high-fiving Schrödinger.

          • woah77 says:

            This line of reasoning highly supports “the world is a simulation” theories.

          • jadagul says:

            I didn’t say it mattered! (Sometimes it does, and sometimes it doesn’t, as you say).

            But if the question is “why can’t we, technically, really know objective truth?” then that’s the answer. If you instead ask the question “how should we respond to this fact?” the answer is mostly “acknowledge that we can never be completely certain of anything, and then go on reasoning as best we can and ignoring things we can’t reason about.”

            Whatever we’re doing works reasonably well, even though we can’t Objectively Justify it from First Principles. So there’s no reason to stop.

        • Michael Watts says:

          First is a sort of linguistic answer. “Truth” is a predicate for _sentences_, not for facts. “I own a car” is true, but my car isn’t true, because what would that even mean? The world _is_, but the world isn’t true or false.
          But sentences all happen in some language

          In another language, calling your car “true” might mean that the BMW seal it displays was affixed by BMW after they manufactured the car, as opposed to because you took one off of someone else’s true BMW and welded it to your crappy car yourself. “True” and “real” are often the same word.

          Even in English, I wouldn’t be surprised to see someone calling the world “false”. You wouldn’t contradict them by calling the world “true”, but that’s more of an artifact of the usage of the word “true” in English than an inevitable consequence of the semantics.

          • jadagul says:

            But those are different meanings of the word “true”!

            The fact that two words have the same spelling and pronunciation and etymology doesn’t make them the same word. This is really obvious for technical terminology; when I say a module is “flat” that doesn’t mean it will fit under a door, or that water won’t flow off of it. But it’s true in general.

            Google’s dictionary search offers four definitions of “true” as an adjective, two as an adverb, and one as a verb. And that’s not really an exhaustive cataloging.

            The word “true” isn’t doing the same work in the sentence “that is a true BMW” that it’s doing in the sentence “‘That is a BMW’ is true”. Those are two different meanings of the word true.

            (You can see this, as you say, by looking at antonyms. You might say “This is a true BMW”, although I suspect you’re more likely to say “This is a real BMW” or “This is really a BMW” or something; but you probably wouldn’t say “This is a false BMW” instead of “This is a fake BMW”.)

            Similarly, people do say the world is “false”, and I might even disagree and call the world “true”. But that’s “true” in a different sense, as in “loyal or faithful”, not “true” as in “a true sentence”. Obviously, because the world isn’t a sentence!

      • Basically the idea is that things can’t be the way they seem, because “the way they seem,” by definition, is a way of seeming, not being.

      • Aging Loser says:

        Kant thinks this because it’s a cool new way to package Neo-Platonism. Schopenhauer was right in reading Kant as holding that the Thing in Itself (the One) represents itself and that’s all there is. Kant could have said “the Cosmic Bud opens into a thousand-petaled spangled blossom” but that wouldn’t have sounded German enough.

    • Sniffnoy says:

      Fundamental physical objects exist in space and time and enter into causal relations.

      Well, it’s worth noting here that if we take seriously, say, the ontology of QFT and the Standard Model (and obviously we shouldn’t actually do that, because it’s not a theory of everything and we’re presumably at least one paradigm-change away from one, to the extent that that’s meaningful; but it’s not like we have anything better at the moment, and it illustrates my point all the same, so I’m going to go with it), the mathematical objects that are physically fundamental are about a dozen or so fields that don’t resemble physical “objects” in the usual sense at all. The things we normally think of as objects, like cars or electrons, are just patterns within these. That something as complex as a car is not ontologically fundamental, and just exists as a pattern within whatever structure is, hardly needs to be stated (or at least here it doesn’t 😛 ). But the idea that this actually applies to all objects-in-the-intuitive-sense is worth stating, because object-in-that-sense is a specific intuition that evolved for dealing with the macroscopic world, and the objects-in-the-mathematical-sense that are actually fundamental don’t fit this idea of object at all: they’re not things each with a location in spacetime, they’re functions that assign values to points in spacetime. (And the ontology for an actual correct theory of everything is liable to be even less intuitive.)

      • Frog-like Sensations says:

        I should probably be more open to the idea that the metaphysics of candidate fundamental physical theories is comparable in weirdness to that of Kant. But if nothing else, the fact that Kant claims his weird view is derivable a priori should give it a leg up.

        And at least the state of the world-grounding fields at one point in time causes the states of those fields at later times for QFT (right?).

        • Sniffnoy says:

          > And at least the state of the world-grounding fields at one point in time causes the states of those fields at later times for QFT (right?).

          Pretty sure that yes it’s deterministic (if you take a many-worlds-like view that is — obviously not otherwise!), i.e. earlier states determine later ones, though this is really not my area — although I suspect that (like with Newtonian physics) to make this work you may have to reify time derivatives that otherwise don’t make sense? IDK, not my area like I said. (Of course whether earlier states cause later ones is potentially another question… 😛 )

          But assuming I’m right I don’t think the determinism is, like, obvious — it’s a property you can prove, but not something, like, built in. Same goes for general relativity. Same goes for Newtonian physics, really, if you don’t know about existence and uniqueness theorems for solutions to differential equations! Of course that’s assuming Newtonian physics is deterministic, which depending on what you allow as inputs it actually isn’t necessarily…

          I guess what it obviously is, if not deterministic, is, like, mechanistic, where even if it requires reifying derivatives, the present still in some sense obviously causes the future, not just determines it. Whereas the theories that have replaced it don’t seem to have that, is my point; determinism is there but whether one would call it the present “causing” the future is not so clear.

          • jadagul says:

            Depending on how you interpret physics fundamentals, physics might really not be causal. First, because if superluminal signalling exists, due to quantum phenomena or an Alcubierre drive or whatever, then consistent causality isn’t possible.

            And second, because a deterministic reversible physics—like, say, many-worlds QM—doesn’t really need causality in it anywhere. Sure, knowing the complete state at time T tells you the complete state for time T+1; but knowing the complete state for time T+1 tells you the complete state for time T, as well.

            In normal life, we have to think of things as being causal. But we don’t have any reason to think that physics needs to be causal on its own. There’s a reason that “why are the laws of physics asymmetric with respect to time?” (the “arrow of time” problem) is considered a major open question.
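The reversibility point above can be sketched with a toy example (a made-up invertible update rule, not anything from actual physics): in a deterministic, reversible dynamics, the "backward" rule determines the past from the future exactly as well as the "forward" rule determines the future from the past, so nothing in the rules themselves picks out a causal direction.

```python
# Toy reversible dynamics: a (position, velocity) pair on integers mod N,
# updated by an invertible map. The forward and backward rules have
# exactly the same character; neither is privileged as "the cause".

N = 101  # size of the hypothetical state space for each coordinate

def step(state):
    """Advance one tick: position shifts by velocity (mod N)."""
    x, v = state
    return ((x + v) % N, v)

def unstep(state):
    """Exact inverse of step: recover the previous state."""
    x, v = state
    return ((x - v) % N, v)

s0 = (3, 7)

# Run forward ten ticks...
s10 = s0
for _ in range(10):
    s10 = step(s10)

# ...then run the inverse rule ten ticks. The past is recovered from
# the future just as deterministically as the future was predicted
# from the past.
back = s10
for _ in range(10):
    back = unstep(back)

assert back == s0
```

The sketch only illustrates the logical point (determination runs both ways in a reversible system); it says nothing about why we nevertheless experience an arrow of time.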

    • MH says:

      From what I remember from years ago the ‘transcendental idealism/empirical realism’ distinction for Kant does a lot of work here, and a lot of the ok-but-what-about-noumena questions result from mixing that up.

      One of the main reasons Kant didn’t talk about the things-in-themselves much was that he didn’t think you could talk about them. It’s not just that you can’t know them in the way that you can’t know what the back of your head looks like at any given time (not generally but with enough trickery maybe or at least enough to have a sense of it).

      The arguments about structural features of experience weren’t empirical ones – it wasn’t that he had poked around and discovered that reality was this way and experienced reality was that way.
      It was that experienced reality was the only one we get to live in (obviously a priori), and that for anything to count as an experience of reality it has to be structured in some way (because a priori experiences are things structured that way*), and that any conceptual structures or concepts relating to it were also structured that way (at least partially as a result), and so any sense of outside-of-experienced-reality was at best a kind of if-we-could-talk-about-it-but-we-can’t sort of thing*.
      Or in other words, reality itself is structured causally because reality could only be experienced reality (there’s no view from nowhere), and you can’t even talk about super-reality in any sensible way or conceive of it in any way, and attempts to do that are a sort of category mistake. It’s at best just our term for the point at which language stops meaning things and concepts stop being about things.
      (There may be some sense in which we can, by analogy or something, talk about it being a line between two separate sorts of things, but not one that lets us talk about the other side, or even ask if there really is one. We can’t talk about whether things-in-themselves are related causally, because we can’t even go as far as attributing properties like number to ‘them’.)

      *There is, I am told, one, fairly substantial, section of the Critique of Pure Reason which is written wholly in the subjunctive. I am also told that German philosophy students often prefer to read English translations of Kant (or whatever language they read well enough in) rather than in the original. These two things may be related.

  3. warrenmansell says:

    My view is that we are still in a behaviourist paradigm within psychology despite Freud, Rogers, Beck & Friston. All of the schools of thought still assume that behaviour is the end of a causal chain, even if the intermediate psychological processes are perceptual, cognitive, or phenomenological. The paradigm shift is to realise that living organisms only follow cause-and-effect laws at the physical and chemical levels, not at the biological and psychological levels. At these levels, living things are purposeful in controlling their own sensory inputs, and they will vary their actions to do so, dynamically, through a closed loop with their environment. In short, that paradigm shift is Powers’ (1957, 1960, 1973, 1992, 2005, 2008) perceptual control theory.
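
    For readers unfamiliar with perceptual control theory, here is a minimal closed-loop sketch in the spirit of Powers' idea (an illustration only, not Powers' own model; the reference value, gain, and disturbances are made-up numbers): the organism varies its output to keep a *perception* near an internal reference, so the action ends up mirroring the disturbance rather than being a fixed response to a stimulus.

```python
# Minimal perceptual control loop (illustrative sketch, not Powers' model).
# The loop varies output to keep the *perceived* input near an internal
# reference, whatever the environment does.

reference = 10.0   # internal reference: what the organism "wants" to perceive
output = 0.0       # the organism's action on the world
k = 0.2            # how fast output is adjusted to reduce error

for t in range(200):
    disturbance = 5.0 if t < 100 else -5.0   # environment changes midway
    perception = output + disturbance        # input = own action + disturbance
    output += k * (reference - perception)   # act to reduce perceptual error

# After settling, perception sits at the reference (10.0) in both regimes,
# while output has flipped from ~5.0 to ~15.0 to cancel the disturbance:
# the action mirrors the disturbance; it is not caused by any stimulus.
```

    This is why, on the PCT view, "behaviour is the end of a causal chain" gets the picture backwards: behaviour is the variable means, and the controlled perception is the constant.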

    • jp says:

      My view is that we are still in a behaviourist paradigm within psychology despite Freud, Rogers, Beck & Friston. All of the schools of thought still assume that behaviour is the end of a causal chain, even if the intermediate psychological processes are perceptual, cognitive, or phenomenological.

      Not really. The Cognitive Turn has led to a great split here. To a Chomskian, the externalisation of cognition is often just an annoyance when studying the actually interesting thing: mental structures. What people say and do matters only insofar as it is sometimes what you need to look at in order to figure out those structures; it is a means to that end, and the structures that don’t result in behaviour (= externalisation) are just as interesting as the ones that do.

      On the other hand, the behaviourist school does not really bother with those mental structures all that much. Given that they are of immense interest to a lot of psychologists, it is simply not true that “we are in a behaviourist paradigm”.

    • Aging Loser says:

      “All of the schools of thought still assume that behaviour is the end of a causal chain” — If behavior isn’t the end of a causal chain then people should stop writing biographies and more generally should stop trying to explain why anyone ever did anything at all and should stop trying to anticipate what people will do.

      • jp says:

        “All of the schools of thought still assume that behaviour is the end of a causal chain” vs. “All of the schools of thought still assume that behaviour is the result of a causal chain”

        The proposal is to think of loops rather than (terminating) chains.

  4. jp says:

    I want to repeat my recommendation for a series by Kuhn’s former student and now famous documentary film maker Errol Morris:

    https://opinionator.blogs.nytimes.com/2011/03/06/the-ashtray-the-ultimatum-part-1/

    It’s perhaps not as deep and well-reasoned as Kuhn himself, but still very thought provoking at times, and quite enjoyable to read. Epistemology is a big word (even on a very smart blog about epistemology), but this one’s got the cute anecdotes to make up for it!

    And now, there’s also a book:
    https://www.press.uchicago.edu/ucp/books/book/chicago/A/bo14057587.html

  5. Ben Wōden says:

    “The major advance in meta-science from about 1980 to 2000, imo, was realizing that molecular biology, e.g., works so differently from fundamental physics that trying to subsume both under one theory of science is infeasible.”

    This is a really key insight, and one that is quite nicely explained in “Theory and Reality” by Peter Godfrey-Smith, who is a philosopher of science with a background in molecular biology. The book is excellent in combining a good summation of the history of philosophy of science with an outline of the author’s own position on the key matters (he’s a scientific realist of sorts, but of a type that’s not that far removed from instrumentalism in terms of its practical ramifications), but without blurring the two together at all.

    Key to his position is a sort of pragmatism which insists that, on a lot of the issues, different sciences (even if you just mean “hard sciences”) do things in such different ways that one overarching philosophy of science cannot possibly make sense of all of it at once, and that, in fact, different fields may well work in philosophically distinct ways. This breaks up the world of “science”, however broadly or narrowly you define it, in a way much more fundamental than I’d previously been willing to countenance, and I’m not yet fully sure of the implications.

    It also talks a lot about the practicalities of science in the middle, and covers a few key concepts like informal replication and one-level vs two-level models of consensus shift; this was of particular interest to me, and, from what I’ve read of SSC so far, would probably interest most people who like reading SSC.

    Informal replication is a particularly intriguing concept. The basics of it seem to be that sometimes startling results get published and are then never either specifically disputed by further research or built upon to explore their ramifications. This is often because scientists try to replicate the original paper, can’t get anywhere, and simply drop the project and move on; knowledge of this spreads informally in the community, and so a certain avenue gets marked down as “almost certainly a mistake of some sort and not worth pursuing further” without very much, if any, published research actually showing this. This is really important when considering the relative wisdom of systematic meta-analyses vs rough expert consensus, and is the kind of thing I can imagine reading about on SSC.

    EDIT – meant to link the book if anyone’s interested – audiobook or ebook from Google Play here:

    https://goo.gl/TUJLDH

    Print version and audible audiobook on amazon here:

    https://goo.gl/qGiR1i

    • sclmlw says:

      I just put this on my reading list. I’m interested to understand this perspective better, because from my perspective it looks kind of like paradigms. For example, prior to the discovery of the structure of DNA, most scientists thought the genetic material was transmitted through proteins. This made logical sense, since proteins were much more complex, and DNA was too simple to transmit the genetic code.

      Contrary to the standard story I first heard as an undergrad, Avery’s paper didn’t cause an immediate sea change, where everyone accepted that DNA must carry the genetic code. (I have a copy of Avery’s paper, and it’s not that impressive. He never tested the alternate hypothesis, just the isolated fractions where DNA would be found; so if you believed in protein as the genetic material it was easy to dismiss Avery’s paper as sloppy and therefore his findings probably the result of contamination. Testing the supernatant and the precipitate at each step would have been a simple, easy, and obvious way to refute the prevailing view he was arguing against. Since he didn’t do that, it’s hard to fault most contemporary readers for dismissing him.)

      DNA, prior to Watson and Crick, was assumed to be a structural/scaffolding material and was therefore not interesting to most biologists interested in genetic transmission. Watson and Crick, and Linus Pauling, weren’t working at the forefront of a competitive field, since most biologists interested in molecular structure were working on proteins (a field Pauling made huge contributions to himself, before his big flub with DNA). Watson and Crick got lucky when a colleague pointed out to Crick that the bases he was using had an unstable enol structure, instead of converting to the more stable ketone structure. Crick didn’t even know about keto-enol tautomerization. This detail was also missed by Pauling, which is why he didn’t get it right; he stated in his paper that he shoved all the phosphates into the center because the bases wouldn’t pair, so they had to point outward. So it’s clear Pauling tried to get them to pair – he thought of it at least – but he couldn’t, because he was using the wrong subunit structure.

      Had Crick not been shown a corrected structure for the bases, where might we be with DNA today? It wasn’t an important molecule to study, because the paradigm pointed the other direction in regard to what carried the genetic material. That looks similar to the discussion on Kuhn and different paradigm perspectives. It’s not a persistent march of progress, where everyone is headed in generally the right direction, and some significant discoveries are the result of puzzles everyone knew about but hadn’t solved yet.

      Later, what we now refer to as ribosomes were called microsomal particles. It was known that they produced protein, but the theory was that each microsomal particle produced a unique protein. One paper overturned this paradigm and showed that ribosomes were interchangeable, and it was later demonstrated that the RNA of the microsomal particles was not the RNA that encodes the resultant protein. Again, the kind of question you ask when you assume one ribosome/one protein is entirely different from the paradigm of interchangeable ribosomes.

      Maybe you can maintain the perspective that molecular biology would have gotten there eventually under the baseline paradigm. Perhaps without this one paper you would still eventually correct the notion to interchangeable ribosomes anyway, just by slowly pounding against the fundamental molecular forces involved, solving the smaller problems until you figure it out.

      Having done my share of bench work, I don’t find that a very credible description of some of the leaps in understanding that have taken place in biology. We can at least surmise that, without guiding paradigms, we wouldn’t see the kind of progress we do; the research might have taken additional millennia. When those paradigms shift, we see a nice acceleration along the direction of the shift.

      Maybe these aren’t as grand of paradigm shifts as Aristotle/Newton, Newton/Einstein, or other examples from physics. But they appear to reflect many of the features under discussion, just at a smaller scale: most biologists think one way – protein carries the genetic material – but a small group bucks the trend. After working on the problem for years, they accumulate sufficient evidence that most scientists move over to their model and the old model is eventually abandoned. Under the new paradigm, new and interesting questions present themselves.

  6. gray says:

    I just thought of a great exemplar of one of Kuhn’s points which JP stated
    “Now Kuhn has plenty of inventory for talking about how a particular school might be thoroughly useless (i.e., it can be inconsistent and utterly fruitless) , but “empirically false from an objective, out-of-paradigm point of view” is not amongst them.”

    Scott Aaronson (of Shtetl-Optimized) once took on Giulio Tononi’s interesting IIT model of consciousness by showing that it seemed to indicate a thermostat would be conscious, albeit at the lowest end of the scale. Surely proof of a problem in the theory, said Scott.

    No, said Giulio in his rebuttal, thermostats are indeed conscious.

    Since intuition and empathetic cues are empirical data for evaluating theories of consciousness, this counts.

    • Lambert says:

      I think you’re getting mixed up.
      ‘Thermostat is a little bit conscious’ is a pretty standard bullet to bite.
      The issue with IIT is that it predicts a Blu-ray player is orders of magnitude *more* conscious than a human.

      • andrewxstewart says:

        I’m an interested skeptic on the Integrated Information Theory approach.

        Calculating even order-of-magnitude estimates of the IIT Phi of a human brain is, of course, computationally intractable.

        I had heard the example of an ornate analogue audio amplifier and guitar wah-wah pedal with feedback having what seems like an absurdly high Phi. I had guessed that even this apparently anomalous example would still be way below a human brain, but don’t have a good sense of that.

        I’m sure there are other edge-cases of IIT that disagree with our intuition of what conscious processing might be. Still, IIT might be a fruitful way of attempting to get some grip on investigating consciousness.

        Your Blu-ray example is interesting to me!
        Why would playing a Blu-ray disc have a high Phi? I would have assumed the “information” in the nodes is fixed, in the 25 GB on the disc. The information is read from the disc and decoded through a fixed algorithm, and pixel changes are sent to the screen. Since this is the end of a shallow cause-effect repertoire tree, with no feedback, I would have guessed the Phi would be fairly low.

        https://en.wikipedia.org/wiki/Integrated_information_theory#Postulates:_properties_required_of_the_physical_substrate

        Can you share why you think the blu-ray Phi might be high?

        If I recall correctly, Christof Koch gave an example of a Turing-type machine reading petabytes of tape and writing the same (or trivially modified) contents to another tape. Despite the petabytes, the IIT Phi would still be low-to-zero here.

  7. Aging Loser says:

    On the Kingshorsey thing about Newton thinking that space is “absolute and yet non-substantive” —

    Newton calls space “the divine sensorium”; Newton’s disciple Clarke (if I remember correctly) calls space the literal omnipresence of God and is accused by Leibniz of thinking of space as the divine body; and both Newton and Clarke express an English view of space as divine that seems to begin with Henry More basically declaring that space is God and, in support of this claim, listing lots of features that space and God have in common, such as infinity and indivisibility. (I learned about all of this from Alexandre Koyré’s book FROM THE CLOSED WORLD TO THE INFINITE UNIVERSE.)

    So it seems to me that Newton does think of space as substantive, although he might distinguish the primary substantiality of space from the secondary substantiality of the bodies into which space sort of congeals here and there. I really like the idea of space as an intelligent (and, I would add, emotional) self-forming entity, because it feels good imagining oneself as being literally within God. (I’ve written a short book that no one will ever read developing a theology along these lines.)

    • Aging Loser says:

      Moreover, a warping rippling “spacetime-fabric” should be called “prime matter”, not “space”, as there is no way to make sense of warps and ripples except in terms of the relationship of the warped rippling stuff to a perfectly smooth background space.

  8. Deiseach says:

    The major advance in meta-science from about 1980 to 2000, imo, was realizing that molecular biology, e.g., works so differently from fundamental physics that trying to subsume both under one theory of science is infeasible.

    I’d be very interested to hear more on this myself.

    The SciDash model shows why we shouldn’t be quite so mocking of geocentrists; the observations are not made from a point standing on the Sun (where you’d get the lovely neat circles – or ellipses – of the orbits) but from Earth, where the observations do show retrograde motions of certain planets. In order to make sense of why a planet seems to be going backwards, you have to include the epicycles. Making the imaginative leap to “if we were standing on the Sun making these observations” would certainly clear up the confusion of the apparent (rather than real) motions, but in that case you could equally say “and if we were standing on the Moon? or Venus? How would it look then?”

    There wasn’t anything concrete to show that the imaginative leap to “observations on the Sun” was correct rather than the practical “we’re standing on the Earth” model until much later (e.g. as mentioned above, better optical instruments to observe the planets and calculate stellar parallax).

    • bullseye says:

      The ancient Greeks did figure out that heliocentric theory could explain retrograde motion more elegantly than geocentric theory, but they rejected heliocentrism because (lacking telescopes) they couldn’t see stellar parallax. Geocentrism predicts no stellar parallax. Heliocentrism predicts no visible stellar parallax only if the distance to the stars is vastly larger than the size of the earth’s orbit, which is true but did not seem reasonable at the time.
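
      A back-of-envelope calculation shows just how unobservable that parallax is without instruments (standard values for the astronomical unit and the distance to Proxima Centauri assumed):

```python
import math

# Back-of-envelope stellar parallax (standard constants assumed):
# p = arctan(baseline / distance), with baseline = 1 AU (Earth's orbital radius).
AU = 1.496e11                      # astronomical unit, metres
ARCSEC = math.pi / (180 * 3600)    # one arcsecond, in radians

def parallax_arcsec(distance_m):
    return math.atan(AU / distance_m) / ARCSEC

proxima = 4.017e16   # distance to Proxima Centauri (~4.25 light-years), metres
p = parallax_arcsec(proxima)
print(round(p, 2))   # ~0.77 arcsec; naked-eye resolution is roughly 60 arcsec

# Even for the nearest star, the annual shift is ~80x too small to see
# without a telescope -- the ancients' "no visible parallax" was good data.
```

      So the Greeks' observation was correct; what failed was the (then reasonable) assumption about how far away the stars could plausibly be.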

      • Joseph Greenwood says:

        +1

        It annoys me when modern people assume that the ancients were stupid or unobservant just because (with hindsight bias) we see they were wrong.

      • Michael Watts says:

        Heliocentrism predicts no visible stellar parallax only if the distance to the stars is vastly larger than the size of the earth’s orbit, which is true but did not seem reasonable at the time.

        This is a weird line to draw, in that Eratosthenes accurately calculated the size of the earth by assuming that the sun was infinitely far away.

        If it works for the sun, why wouldn’t it work for the stars?

        • bullseye says:

          Eratosthenes correctly assumed that the distance between the earth and sun is vastly larger than the size of the earth. But the distance to the stars is vastly larger than the distance to the sun; that’s the part that threw the Greeks off.

  9. masbackward says:

    You really ought to read The Invention of Science (David Wootton), which is both a spectacular history of how science was invented and a direct rebuttal to the notion that scientific ideas’ success is unaffected by how well they match objective reality. Among other things, he argues convincingly that one of the things that had to happen for science to get going was for people to develop the idea of new discoveries about the world, something educated people didn’t think could happen until around the time of the discovery of the New World (which was very important because it conclusively disproved the Aristotelian view of the earth as being composed of spheres of earth, water, and air). My understanding (from a well-known historian of science I happen to know personally) is that the book is controversial in the field but that the author is incredibly widely read and respected. https://www.amazon.com/Invention-Science-History-Scientific-Revolution/dp/0061759538

  10. Sebastian_H says:

    The point about interpreting older scientific language as non crazy provides an interesting call back to our discussion about C.S. Lewis’s non Christian contributions. One of his things was in noticing that the biggest danger for a modern person reading old literature wasn’t being confused by symbolism that had no current significance (things like Caesar’s ring) but by symbols which have current cultural relevance that is very different. In “The Discarded Image” he talks about a number of cases such as: Summer—in Greek stories summer is the time of death (the heat kills everything), confusing it with our symbolism of idyllic summers can be disconcerting; the space between the stars—the medieval writer assumes that space is well lit, so any language that refers to the space between the stars should call up something like a well lit passageway, not the void we think of now.

    The book is worth reading even if you don’t have intense interest in medieval literature. It really helped me see how confusion about other people’s deeply held terms can cause problems.

    • hls2003 says:

      +1

      An excellent read.

    • Michael Watts says:

      the biggest danger for a modern person reading old literature wasn’t being confused by symbolism that had no current significance (things like Caesar’s ring) but by symbols which have current cultural relevance that is very different.

      This is something I think about in the context of communication between a native and nonnative speaker in whatever language. (Basically the same concept.) There are several things that might happen:

      1. The foreigner has no idea how to express himself and comes out with nonsense. The native fails to understand, the foreigner sees that the native hasn’t understood, and everyone is aware that communication has not taken place.

      2. The foreigner comes out with something that isn’t quite grammatically valid, but strongly suggests a certain meaning. The native correctly takes this as actually meaning the thing it almost definitely means.

      3. The foreigner comes out with something perfectly grammatically valid, and the native understands it.

      4. The foreigner comes out with something perfectly grammatically valid, but is unaware of the meaning of their own speech. The native understands what was said, but doesn’t realize that the intended meaning was something entirely different.

      (There are infinite permutations of this scenario, but these four capture what I want to talk about.)

      People think of “failure” as case 1. But case 1 is a success inasmuch as everyone involved agrees on what happened and how to deal with it, even if dealing with it basically just means ignoring the other person. The real failure is case 4, where both parties believe that communication succeeded despite the fact that they have radically different understandings of what was communicated. This is much more dangerous than the case where nothing was communicated and everyone knows that.

      I see a lot of rhetorical wondering about why people are so fussy about grammar when the intended meaning is so obvious. And I’m pretty certain the answer is that someone who fails to use proper grammar (case 2) is sending a strong signal that case 4 is a real risk when talking to them. If everything someone says is perfectly idiomatic, you can be pretty confident that any given thing they say is intended to mean what it really does mean. If everything someone says is a little bit off, you need to be on your guard every time you talk to them. Of course you can’t be – constant vigilance is impossible to maintain – but one day the failure will occur and you won’t be able to blame them for it.

      Case 3 is also a great example of life being unfair. Coming out with a sentence that’s completely correct is something I want to be proud of. But while speaking like a foreigner who’s pretty good with the language is something natives will compliment you on, saying something perfectly correct never draws comment at all. People just accept correct sentences as the natural order of things.

  11. > The major advance in meta-science from about 1980 to 2000, imo, was realizing that molecular biology, e.g., works so differently from fundamental physics that trying to subsume both under one theory of science is infeasible.

    Several different fields came to this general conclusion during that period. In the philosophy of science, it’s associated with “The Stanford School.” The [Disunity section](https://plato.stanford.edu/entries/scientific-unity/#Disu) of the SEP article on the Unity of Science has a summary. Peter Godfrey-Smith was a member of that “[Disunity Mafia](https://en.wikipedia.org/wiki/Stanford_School)”; the excellent comment here by Ben Woden gives a sense of the School’s program. In addition to Godfrey-Smith’s _Theory and Reality_, a good introduction to the philosophy of science from the Stanford point of view, I would recommend his _Philosophy of Biology_, which goes into more specifics about how different it is in fundamental approach from theoretical physics.

    Here’s a little insight, which I took from Karin Knorr-Cetina’s _Epistemic Cultures: How the Sciences Make Knowledge_. (She is a sociologist of science, not a philosopher, but of the same approximate era and views as the Stanford School. I can’t really recommend the book overall; the ratio of words to insight is high, but there are some good bits in there.) She spent years at CERN studying the group building the top quark detector, and then again years in a molecular biology lab. A salient contrast is between the way experimental physicists vs. biologists deal with anomalous measurements.

    In experimental physics, a measurement that doesn’t make sense means that your instrument is miscalibrated. Basically everything you do is calibrating your instrument, and once it stops giving you anomalous outputs, you measure the thingy (the mass of the quark in this case), and then you are done. In order to have confidence that the machine is calibrated correctly, you usually need to understand *exactly why* you are getting each particular class of anomalies. Then you alter it to eliminate them.

    (For more insight on this, I’d strongly recommend Philippe Sormani’s _Respecifying Lab Ethnography: An Ethnomethodological Study of Experimental Physics_. He shadowed a graduate student in superconducting scanning tunneling microscopy, who spent years calibrating a balky pile of equipment. Sormani was present at the moment of discovery when the damn thing finally worked, and the few seconds of actual measurement, which constituted the physicist’s PhD thesis, occurred. Lots of fabulous details about how science actually gets done: “by any means necessary.”)

    On the other hand, in molecular biology, if an experiment gives anomalous results, you almost never care why. It just means “something went wrong,” and you do the experiment over differently, almost at random, varying conditions, until it gives an interpretable result. Trying to figure out in detail why it went wrong would be a lot of work and the answer would be something uninteresting and irrelevant. Biological systems are incredibly complicated, and there’s always a million possible explanations for why uninterpretable extra bands appear on your electrophoresis gel. The best bits of Knorr-Cetina’s book (imo) are her examples of biologists coming up with ways of working around junky results.

    This contrast in attitudes toward anomalies is not the central argument for scientific disunity; it’s a motivating example.

    The general observation is that different sciences don’t have much in common. Possibly nothing at all, but certainly not enough to derive good explanations for how and when and why specific sciences work. This does *not* suggest that sciences *don’t* work; many (not all) of them clearly do. It’s just that a unified explanation isn’t feasible. That’s disappointing, because we’d like a proof that Science Is The Way To Truth. Unfortunately, we’re not going to get that, in part because Science isn’t a thing. It’s just smart people trying to figure different sorts of stuff out, _by any means necessary_.

    I said “from about 1980 to 2000” because this point was thoroughly established by the end of that period, I think. There is more work being done along these lines, but it’s incremental and not as exciting.

    I think there’s a new revolution in meta-scientific understanding going on now, driven in part by the replication crisis. It can take the disunity thesis as a given and build on that, but its tools and directions are rather different. SUPPOSEDLY I am writing [a book](https://meaningness.com/eggplant) about this (on hold for a year due to unrelated responsibilities).

    • Aging Loser says:

      “The general observation is that different sciences don’t have much in common” — Different sciences share the assumption that stuff is made of various kinds of stuck-together lower-level stuff that behaves in certain ways.

    • Andkat says:

      ‘On the other hand, in molecular biology, if an experiment gives anomalous results, you almost never care why. It just means “something went wrong,” and you do the experiment over differently, almost at random, varying conditions, until it gives an interpretable result. Trying to figure out in detail why it went wrong would be a lot of work and the answer would be something uninteresting and irrelevant. Biological systems are incredibly complicated, and there’s always a million possible explanations for why uninterpretable extra bands appear on your electrophoresis gel. The best bits of Knorr-Cetina’s book (imo) are her examples of biologists coming up with ways of working around junky results.’

      I have worked in laboratories where testing, calibrating, and having an explanation for every anomaly on a gel or purification or binding assay was in fact an expectation. Molecular biology is highly feudal, and the immediate foundations on which it rests (i.e., extrapolations from chemistry) are pretty underdetermined relative to fundamental physics (which is indeed a pretty central observation as you progress from fundamental physics to many sorts of complex system). Narrowing down the possibility landscape of complicating factors is therefore more difficult and less grounded in widely understood mathematical tools: not because people wouldn’t be applying rigorous QM to simulate the molecular dance of entire cellular systems, but because that is simply not technologically feasible (not to mention that most contemporary practitioners in molecular biology are not trained in much depth at the level of chemical or more fundamental physics anyway).

      Many laboratories are sloppy and chase results that can be hyped into a publication over solid foundations; many people do not have the time to elucidate what went wrong in every bit of troubleshooting or exploratory work. These concessions to practicality can range from being necessary evils to outright scientific malpractice, but they don’t represent a fundamental ‘paradigm’ of the field, and in some cases are simply artifacts of perverse incentives and bad training. It is worth bearing in mind that the reproducibility crisis in biomedicine, a yet further step up on the axes of both complexity and perverse incentives, is essentially as bad as that in psychology.

      Anomalies and ‘throwaway’ results in e.g. biochemistry and molecular biology can often subsequently prove to be quite important on deeper investigation, or at least indicators of somewhat interesting phenomena. The core observation is little more than that scientists are, in fact, humans working in human institutions with limited time and bills to pay, and with varying levels of personal rigor, vigilance, and competency enforced to varying degrees by the culture of training, review, and accountability in a field; I do not think a limited and arbitrary sampling of laboratories across the fields says anything meaningful about the underlying premises of the intellectual paradigm. The belief in what is in principle best and necessary for proper rigor does not necessarily align well with the distribution of actual practices in all contexts, and there are some practicing scientists who don’t really have much of a formal sense of the distinction between the two at all, because scientific training is for many fields ultimately an ad hoc product of what your graduate (and later) advisors and peers happen to instill in you explicitly or implicitly (and in some cases neither of them gives a damn about what you know or don’t, provided that things that look plausibly like publishable results are produced).

  12. arbitraryvalue says:

    One brief illustration of specialization’s effect may give this whole series of points additional force. An investigator who hoped to learn something about what scientists took the atomic theory to be asked a distinguished physicist and an eminent chemist whether a single atom of helium was or was not a molecule. Both answered without hesitation, but their answers were not the same. For the chemist the atom of helium was a molecule because it behaved like one with respect to the kinetic theory of gases. For the physicist, on the other hand, the helium atom was not a molecule because it displayed no molecular spectrum. Presumably both men were talking of the same particle, but they were viewing it through their own research training and practice. Their experience in problem-solving told them what a molecule must be. Undoubtedly their experiences had had much in common, but they did not, in this case, tell the two specialists the same thing. As we proceed we shall discover how consequential paradigm differences of this sort can occasionally be.

    This is like asking two people if Pluto is a planet and then when they give different answers, claiming that they have some sort of important disagreement about the nature of Pluto. I think paradigm differences are a real thing but the story here is not an example of one.

    If we accept both that Einstein’s cosmology is better and that Newton’s math is still pretty good (rather than junk science), we are left with an uncomfortable conclusion. Newton’s degree of success at modeling phenomena in motion did not correlate strongly with his degree of success at explaining the structures or characteristics of reality responsible for that phenomena.

    Newton might have thought that he had the equations that explained motion when he actually turned out to have the equations that the real equations of motion reduced to when terms that are undetectably small in his context were neglected, but from my point of view a theory is the claim that “whatever reality is, it behaves like this” and in that sense Newton’s theory isn’t wrong.

  13. Bugmaster says:

    When Einstein comes along, he overturns everything Newton thought about the nature of the universe. Space and time are no longer to be regarded as merely formal properties “within” which things move. Time is relative, space and time are intertwined, and space-time is the very “thing” of which gravity consists … Newton’s degree of success at modeling phenomena in motion did not correlate strongly with his degree of success at explaining the structures or characteristics of reality responsible for that phenomena.

    I honestly do not understand why this is such a big deal.

    If you asked me to build a trebuchet to lob rocks over the enemy fortifications, I’d start by assuming that the Earth was flat. This is a woefully inadequate model of reality; it doesn’t even come close to explaining most of our perceptions about the world… but it works for the trebuchet. I don’t need it to throw rocks over the horizon or into orbit, I just need them to go about 300 meters and over the walls.

    Now, if I look up into the night sky, my flat earth model is no longer sufficient. Does this mean that everything I’ve calculated up to that point is totally wrong? No, because I’m standing in the middle of my castle, which used to be the enemy’s castle. If I come up with the Heliocentric cosmology in order to explain the sky, does this mean that my new model is going to be 100% perfect and immutable? No, of course not; it’s likely that I’m totally wrong about the fine details. Einstein will eventually come along and solve those for me, but I wouldn’t care unless I happened to be looking at Mercury (or gravitational lensing etc.)

    And yes, Einstein is probably wrong, as well. Why is this obvious fact treated as some kind of a grand revelation? What’s wrong with just building iteratively better models of reality, and re-using the old ones when they make the calculations easier? I honestly don’t see the philosophical significance of this.

    • gettin_schwifty says:

      The point is that we can’t prove “Einstein is right about space-time and all that” from the correctness of his equations. Newton’s equations were very good at predicting (everyday life) motion, and we “know” he was wrong about the fundamentals, so we can’t know that Einstein isn’t equally (or more, or less) mistaken about the nature of things despite his improved accuracy in prediction.

      People like to say science tends towards truth. The Aristotle/Newton/Einstein example shows that it tends towards increased predictive accuracy (puzzles solved?), where the philosophical underpinnings are somewhat of a random walk. (This only applies to physics, but other sciences look even worse)
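      To make “Newton’s equations were very good at predicting everyday motion” concrete, here is a minimal sketch (my own numbers, using only standard physical constants, not anything from the thread) of how little the relativistic correction factor γ departs from 1 at everyday and even planetary speeds:

      ```python
      import math

      C = 299_792_458.0  # speed of light in vacuum, m/s

      def gamma(v):
          """Lorentz factor; Newtonian mechanics is the gamma ~ 1 limit."""
          return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

      highway_car = gamma(30.0)        # a car at ~30 m/s
      mercury_orbit = gamma(47_000.0)  # Mercury's orbital speed, ~47 km/s

      # At everyday speeds the correction is on the order of 1e-14; even at
      # Mercury's orbital speed it is only ~1e-8, which is why Newton's
      # equations predicted so well for two centuries.
      ```

      The "random walk" point stands regardless: a correction this small says nothing about whose picture of space and time is the true one.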

      I really recommend the Samzdat posts about Kuhn; I think everything I have to say right now is better said there.

      • Bugmaster says:

        so we can’t know that Einstein isn’t equally (or more, or less) mistaken about the nature of things … People like to say science tends towards truth. The Aristotle/Newton/Einstein example shows that it tends towards increased predictive accuracy (puzzles solved?)

        I honestly can’t tell the difference between “truth” and “increased predictive accuracy”.

        To put it another way, why are our models predicting reality with increased accuracy? It could be because we are getting closer and closer to modeling the true nature of reality — with the understanding that 100% accuracy is unachievable (plus or minus the Simulation Hypothesis). Or, it could be that our models have nothing whatsoever to do with external reality, and we’ve just been repeatedly lucky for the past 1000 years, but how likely is that?

        • Joseph Greenwood says:

          If you believe reality is “math all the way down”, then I guess there is no difference between predictive accuracy and truth. But most theories (even in math) have a sort of tao—that is, a paradigm that lies behind them. Einstein and Newton both derived their formulas from particular geometric intuitions, and Newton’s geometric intuition was wrong. Metaphysical underpinnings matter.

          • Bugmaster says:

            Once again, I don’t see the difference between “geometric intuition” and “a model of reality based on math” (as long as you count geometry as being part of math, which I do). I understand that you could say, “deep down, reality is made up of particles”; then someone else comes along and says, “actually, it’s quantum probability packets”. Yes, these are two different models, and one of them is more accurate; however, I don’t think this fact is particularly earth-shattering. Models improve as we gather more data. Unless, perhaps, the second guy is claiming “yes, it’s quantum probability and that’s it, I’ve now devised the 100% perfectly accurate model of reality which can never be improved upon”. In that case, yes, such a statement would be earth-shattering… if it were true.

    • Deiseach says:

      I don’t need it to throw rocks over the horizon or into orbit, I just need them to go about 300 meters and over the walls.

      You don’t need a theory of gravity either, you just need to remember that your projectiles (the big rocks, in this case) will fall towards the earth over distance and not go in a straight line, so that you calculate the angle at which to set your trebuchet in order that you will not under- or over-shoot the distance.

      From the little I can quickly find online, aiming a trebuchet seems to be a kind of rule-of-thumb, seat-of-the-pants affair that you learn by experience; real ballistics and gun laying came later (but still before Newton). You don’t need to be highly educated, or even very much educated at all, but a knack for mathematics really is helpful, or even vital, for grasping the principles involved when working out angles of elevation etc.

      • Michael Watts says:

        You don’t need a theory of gravity either, you just need to remember that your projectiles (the big rocks, in this case) will fall towards the earth over distance

        I would be happy to describe “big rocks fall towards the earth” as a theory of gravity, even if it’s less sophisticated than other theories of gravity.

      • Bugmaster says:

        Yes, I agree, historically trebuchets were more art than science; I just used them as a convenient illustration. I was thinking of how a modern person might approach building one, not how medieval people were actually building them, historically speaking.

        • If you are designing a trebuchet from scratch, the critical point isn’t the shape of the Earth. It’s a good enough feel for (Newtonian) mechanics to realize that the maximum range, velocity held constant, is at an angle of 45°, but the maximum velocity is with the arm vertical (because that’s the minimum potential energy of the counterweight), so you need a sling designed to let you have both at the same time.

          • Bugmaster says:

            Right, but like I said, you don’t even need the full Newtonian mechanics; you just need to know the acceleration due to gravity at the surface of the Earth (which you assume to be flat, implicitly). You don’t even need to use calculus (explicitly, that is).
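            A minimal sketch of the flat-earth ballistics being described here (the 300 m figure is Bugmaster’s; the launch speed it implies is my own arithmetic): with constant surface gravity and no air resistance, range is v²·sin(2θ)/g, which is maximized at 45°, as noted upthread.

            ```python
            import math

            G = 9.81  # surface gravity, m/s^2 (the flat-earth assumption: constant g)

            def launch_range(v, theta_deg):
                """Range of a projectile launched at speed v and angle theta over flat ground."""
                return v ** 2 * math.sin(2 * math.radians(theta_deg)) / G

            def speed_for_range(target_range):
                """Minimum launch speed to reach target_range, achieved at 45 degrees."""
                return math.sqrt(target_range * G)

            v = speed_for_range(300.0)  # ~54 m/s clears the 300 m to the enemy walls
            assert launch_range(v, 45) > launch_range(v, 30)  # 45 degrees beats
            assert launch_range(v, 45) > launch_range(v, 60)  # nearby angles
            ```

            No calculus, no theory of gravitation, no round Earth: just g and a sine.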

  14. Bugmaster says:

    progressives needed some argument for why scientific institutions were right and Christian institutions were wrong (the real answer, unironically: our people are smarter and less biased).

    Well, that, and also the scientific process is much better than the Christian one. The main feature of the scientific method is self-correction by verification against an external, objective source of truth (however vague or unreliable that source may be). It’s missing from Christianity (and religious faith in general), which is why you get all these weird heresies all over the place that keep replicating without rhyme, reason, or resolution.

    • Jaskologist says:

      Heresies do rhyme. Gnosticism and Arianism are constant temptations that regularly pop up, die out, and then pop up again. They wear slightly different costumes each time, but when you study enough history you come to recognize that it’s the same guy underneath those glasses.

      • Aging Loser says:

        It’s tempting to make sense. Those are the main two ways to make sense, given certain assumptions. The Orthodox route consists of embracing both contradictory statements whenever there’s a contradiction (Christology, Trinity, Inherited Original Guilt, Superloving Sadistic Psychopathic God).

        • Jaskologist says:

          You can tell that Christianity is false because heresies keep popping up without rhyme or reason. Also, you can tell that Christianity is false because heresies pop up with regularity and great resemblance to each other.

          As I read and re-read all the non-Christian or anti-Christian accounts of the faith, from Huxley to Bradlaugh, a slow and awful impression grew gradually but graphically upon my mind — the impression that Christianity must be a most extraordinary thing. For not only (as I understood) had Christianity the most flaming vices, but it had apparently a mystical talent for combining vices which seemed inconsistent with each other. It was attacked on all sides and for all contradictory reasons. No sooner had one rationalist demonstrated that it was too far to the east than another demonstrated with equal clearness that it was much too far to the west. No sooner had my indignation died down at its angular and aggressive squareness than I was called up again to notice and condemn its enervating and sensual roundness.

          GK Chesterton, Orthodoxy

          • g says:

            If you have a body of doctrine that is internally incoherent, and tell a lot of people that they are supposed to believe it, then you should expect (1) that lots of people will decline to do so and (2) that many of them will do so in similar ways (attempting to change as little as possible while restoring coherence).

            The self-contradiction you profess to find in criticisms of Christianity is (in this instance) largely imagined. In particular, when Bugmaster said “without rhyme, reason, or resolution”, I don’t think he intended to assert that there are no patterns at all in which heresies crop up. (If he did, then he was wrong, and I expect that on reflection he will agree that he was wrong.)

            The fact that heresies keep popping up is evidence that Christianity is wrong, but not terribly strong evidence. What it is quite strong evidence for is the point Bugmaster was actually making, namely that it is not very well anchored to reality: if two variants of Christianity arise, the means for determining which is more correct are rather poor. Of course this state of affairs is quite compatible with Christianity being right: it’s what would happen, e.g., if a benevolent god handed out some revelations, accurate but incomplete and liable to misinterpretation, and then withdrew and declined to offer any clarifications or corrections no matter what the human race did with those revelations.

    • Deiseach says:

      the real answer, unironically: our people are smarter and less biased

      Yeah, that Newton, what a dum-dum! Luckily there was a smart atheist out there who made all those discoveries in maths and physics instead 🙂

      I don’t think anybody is going to object to “Newton was a great mathematician and a good scientist”. He was not only a Christian but, even by orthodox Christian standards, out of his tree. (No, Isaac, The Bible Code is not the answer. Though if anyone is curious, he says the world won’t end any earlier than 2060, so believe no ‘prophets’ claiming it will end before then).

      Sometimes you get both, you know? Imma quote St Augustine of Hippo here, talking about his disillusionment with the Manichaeans, influenced by hearing Mani lecturing about scientific topics of the time which he got obviously wrong:

      Chapter 5. Of Manichæus Pertinaciously Teaching False Doctrines, and Proudly Arrogating to Himself the Holy Spirit.

      8. But yet who was it that ordered Manichæus to write on these things likewise, skill in which was not necessary to piety? For You have told man to behold piety and wisdom, of which he might be in ignorance although having a complete knowledge of these other things; but since, knowing not these things, he yet most impudently dared to teach them, it is clear that he had no acquaintance with piety. For even when we have a knowledge of these worldly matters, it is folly to make a profession of them; but confession to You is piety. It was therefore with this view that this straying one spoke much of these matters, that, standing convicted by those who had in truth learned them, the understanding that he really had in those more difficult things might be made plain. For he wished not to be lightly esteemed, but went about trying to persuade men that the Holy Ghost, the Comforter and Enricher of Your faithful ones, was with full authority personally resident in him. When, therefore, it was discovered that his teaching concerning the heavens and stars, and the motions of sun and moon, was false, though these things do not relate to the doctrine of religion, yet his sacrilegious arrogance would become sufficiently evident, seeing that not only did he affirm things of which he knew nothing, but also perverted them, and with such egregious vanity of pride as to seek to attribute them to himself as to a divine being.

      9. For when I hear a Christian brother ignorant of these things, or in error concerning them, I can bear with patience to see that man hold to his opinions; nor can I apprehend that any want of knowledge as to the situation or nature of this material creation can be injurious to him, so long as he does not entertain belief in anything unworthy of You, O Lord, the Creator of all. But if he conceives it to pertain to the form of the doctrine of piety, and presumes to affirm with great obstinacy that whereof he is ignorant, therein lies the injury. And yet even a weakness such as this in the dawn of faith is borne by our Mother Charity, till the new man may grow up unto a perfect man, and not be “carried about with every wind of doctrine”. (Ephesians 4:13-14) But in him who thus presumed to be at once the teacher, author, head, and leader of all whom he could induce to believe this, so that all who followed him believed that they were following not a simple man only, but Your Holy Spirit, who would not judge that such great insanity, when once it stood convicted of false teaching, should be abhorred and utterly cast off? But I had not yet clearly ascertained whether the changes of longer and shorter days and nights, and day and night itself, with the eclipses of the greater lights, and whatever of the like kind I had read in other books, could be expounded consistently with his words. Should I have found myself able to do so, there would still have remained a doubt in my mind whether it were so or no, although I might, on the strength of his reputed godliness, rest my faith on his authority.

      • Bugmaster says:

        Like I said above, it’s the process you follow that gives you predictive power, not just raw intelligence. Newton utterly failed in his alchemical endeavours, for this very reason; and he succeeded at his scientific research for that same reason.

        That said though, modern scientists are more likely to be non-religious than non-scientists; and of those scientists who are religious, most are of the “there might be a God but he/she/it is this complex metaphysical quasi-entity” flavor, not of the “the Earth is 6000 years old and you will BURN in HELL for eating SHRIMP !!11!” flavor. There could be multiple reasons for this, but one of the reasons, IMO, is religious faith. Science is all about answering questions; but faith tends to provide ready-made answers while at the same time forbidding inquiry into certain topics. This is why, IIRC, Newton (or maybe it was someone else, I forget) did not bother investigating how the Sun, the Earth, and all the other planets came to be — because “God ordained it so” was enough of an explanation.

        • Molehill says:

          But did he follow a different process for alchemy? If we had asked him, would he agree that he was following a different process? We recognize the process as different, but is that just because his foundational concepts were so far from modeling reality that his experiments shed no light on the relevant laws of nature?

          • Bugmaster says:

            As far as I understand, yes. He started out by assuming that the Philosopher’s Stone was somehow possible (despite zero supporting evidence), and set out to discover/create one. By contrast, he also noticed (and/or read) that the celestial and earthly bodies moved in a certain way, and tried to figure out why that was. These two processes are almost exact opposites of each other. The fact that his alchemical studies “shed no light on the relevant laws of nature” is not a bug — instead, it is a failure mode of the faith-based process.

            And on a sidenote: yes, I understand that technically the very notions of “process”, “evidence”, and “difference” could be called “foundational concepts”; but if you follow down that road, you end up in a place where nothing is real, people argue over what the meaning of “is” is, and nothing useful generally gets done.

        • fluffykitten55 says:

          He was clearly bipolar, which likely created problems of delusions/insufficient skepticism, alongside boosting creativity and confidence.

          In a world where not much is known, ‘trying out lots of crazy ideas’ is probably not far from optimal, because you cannot so readily combine skepticism and prior knowledge to stop you from going down some dead end.

  15. P. George Stewart says:

    One of your commentators said: “Kuhn’s view also answers the question of why falsification has always been popular among scientists on the ground. When a field is performing “normal science” under a particular paradigm, the acceptance of particular facts or pieces of theory largely does resemble falsification: either the new proposal fits the evidence under the paradigm, or it does not. Kuhn (and Feyerabend) show how this simplistic model falls apart when comparing between paradigms, because there is no way to agree upon what constitutes falsification.”

    This doesn’t make any sense; falsification is precisely what you do when you don’t know. It’s how you whittle away at bold hypotheses that go beyond present evidence.

    Step 1) Imagine a possible way things could be (usually, one that goes beyond present evidence and is boldly different from other current theories)

    Step 2) Deduce consequences for experience and experiment that would necessarily obtain if things are that way.

    Step 3) If you don’t get those consequences, then you know for certain that things can’t possibly be that way.
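    The three steps read naturally as modus tollens; a toy sketch (the swan example is mine, not the commenter’s):

    ```python
    def falsified(hypothesis, observations):
        """Modus tollens: if the hypothesis entails a prediction about each
        observation and any observation violates it, the hypothesis is rejected."""
        return any(not hypothesis(obs) for obs in observations)

    # Step 1: a bold hypothesis — every swan is white.
    all_swans_white = lambda swan: swan["color"] == "white"

    # Step 2: the hypothesis entails a consequence for each sighting.
    sightings = [{"color": "white"}, {"color": "white"}, {"color": "black"}]

    # Step 3: one black swan suffices; the hypothesis is now definitely false.
    assert falsified(all_swans_white, sightings)
    assert not falsified(all_swans_white, sightings[:2])  # unfalsified, not proven
    ```

    Note the asymmetry the commenter relies on below: surviving the test leaves the hypothesis a candidate, not a certainty; only the rejection is certain.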

    • jadagul says:

      But that’s the point—all of that action happens within a paradigm.

      Falsification is great for rejecting a _hypothesis_ given a _paradigm_. So it works perfectly for normal science. But no single observation can reject a paradigm, for two reasons:

      1. Paradigms have free variables. Sabine Hossenfelder’s _Lost in Math_ has a lot of great examples of SUSY theorists saying “Yes, SUSY predicts we’ll see this if we run an experiment at this energy”. And then we don’t see it, and theorists say “oh, we must have had some of the parameters wrong. SUSY predicts we’ll see this if we run stuff at a higher energy level”. This is because SUSY tells us what hypotheses should look like, but isn’t limited to one specific hypothesis.

      So you can’t really falsify a paradigm, because you can change the details around without rejecting the paradigm wholesale.

      2. In any paradigm, some experiments will have results that are just weird. And some experiments are just wrong. If we get an experiment that shows neutrinos travelling faster than light, our default assumption is that the experiment screwed up. And even if no one can figure out how it was screwed up, it’s pretty reasonable to assume the experiment was an error until you see it happen a lot more times.

      And speed-of-light is one of the clearest up-and-down claims of the GR paradigm. A lot of other things are fuzzier, and a lot of ideas can be explained in either paradigm.

      Remember, when the Copernican model was advanced, it actually made predictions _less well_ than the previous Ptolemaic model. It just made them simpler and cleaner. (And, of course, under the principle of relativity, we _can_ construct a system with the earth at the center, and it’s physically equivalent. It’s just way messier to do it that way).
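      Point 1 above can be caricatured in a few lines (the energy-scale parameter is my stand-in for SUSY’s free variables, not anything from Hossenfelder’s book): each concrete model in the family is individually falsifiable, but an experiment eliminates only the models within its reach, never the family itself.

      ```python
      def surviving_models(predicted_scales, max_energy_probed):
          """An experiment probing up to max_energy_probed falsifies only those
          models whose predicted new-physics scale lies within its reach."""
          return [s for s in predicted_scales if s > max_energy_probed]

      # A 'paradigm' as a one-parameter family of models, each predicting new
      # particles at a different energy scale (arbitrary units).
      family = [1, 10, 100, 1_000, 10_000]

      after_run_1 = surviving_models(family, 50)        # kills the 1 and 10 models
      after_run_2 = surviving_models(after_run_1, 500)  # kills the 100 model

      # Each null result moves the goalposts rather than emptying the family,
      # so the paradigm as a whole survives every individual experiment.
      assert after_run_2 == [1_000, 10_000]
      ```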

      • wollywoo says:

        So a paradigm is just a theory with lots of unspecified parameters that you can tune to fit different results? That seems… uncontroversial? I don’t know how you get from there to any surprising claims about the nature of scientific truth. Theories generally have limits to what data can be explained by this parameter tuning. If they don’t, then they are unfalsifiable, which is a genuine strike against them.

        If the Kuhns and the Feyerabends of the world are just saying “Look now, her impossibly complex M-theory and his impossibly complex quantum gravity can both be made to explain the same data, so you can’t possibly decide between them based on evidence” then… everybody already agrees. Some theories are still in the stage where they cannot make strong predictions, and, who knows, maybe they always will be. But that doesn’t change the fact that there are other theories, like Einstein’s, that are better – they make definite predictions. Newton and Einstein might disagree vociferously, but if you lock them in a room with lots of clocks and fast-moving particles and such and let them make bets on what will happen, eventually Newton will either concede defeat or lose all his money.

        *please note that I haven’t actually read Kuhn or Feyerabend, and I’m not a real scientist.

        • P. George Stewart says:

          No, paradigms are connected to “normal science.” Kuhn’s theory is really sociological, not epistemological, or if it is, it’s only indirectly epistemological; it’s about the social processes around science. “Paradigm” is a sociological category, not an epistemological one.

          Falsificationism is purely epistemological, and it doesn’t require anything to be a statistically common way of looking at things and doing things – that’s precisely the point, it can propose things that aren’t within the bounds of the normal way of looking at things.

          So long as there’s a chain of logic that connects the proposed way-things-could-be (behind the curtain, so to speak) to consequences for experience (on stage), the theory can be decisively rejected (via modus tollens) if the consequences don’t obtain.

          The key to understanding this is, I think, that falsificationism (and more broadly, critical rationalism) is not in the business of justification (hence the objections from Goodman, Feyerabend, etc., miss the point). We’re not looking for reasons to believe the theory, we’re looking for reasons to reject false theories.

          Whatever’s left standing after criticism is not justified, it’s just a possible truth candidate (whereas the ones that were rejected are definitely false, and therefore cannot possibly be standing truth candidates any longer).

      • Andkat says:

        It is worth noting that saying a theoretical framework may be tuned to yield an ensemble of independent models with different experimental predictions is not equivalent to saying that “the paradigm can be tuned infinitely to explain any possible result”; the former can still in principle be falsified by falsifying all of the different models it can produce. It is very rare that you will ever be in a conceptual space where your models for a phenomenon do not encompass a range of different experimental thresholds, depending on what specific assumptions and complicating factors (some of which one might casually surmise to be unlikely) go into each permutation; a model that can in principle kick the can down the road literally forever is something else entirely (and essentially unscientific). This holds, I would argue, on essentially every level of scientific endeavor, from day-to-day practice with very specific models for your results of the day to the overarching structure of paradigms; to my understanding, the mainstream switch away from geocentric models came largely when heliocentric models became predictively more useful for practical astronomy. If you could keep adding epicycles in some predictable or bounded way to self-consistently explain all of modern cosmological observation with equivalent precision in a geocentric framework, then geocentrism would not be fundamentally wrong, and we would have to remain undecided about the matter until future observations incompatible with all the ways the theory can be extended are made.

  16. wollywoo says:

    The fact that early-twentieth-century scientists did not always agree on the definition of a molecule does not seem surprising or interesting in the least, any more than the fact that Brits and Americans do not always agree on the definition of ‘pants’, or the pronunciation of ‘tomato’. Similarly, the fact that Copernicus re-defined ‘planet’ is not important. Indeed, the Copernican model can be communicated more effectively by playing with a basketball and a tennis ball than by using words at all. There may be times when scientific theories cannot be communicated across a barrier of built-up concepts and the accompanying terminology, but these are not examples.

  17. albatross11 says:

    How much of the difference between Newton’s and Einstein’s models of the universe came down to differences in available data on which to build a model? I mean, there are a couple centuries of experiments and observations and advances in physics separating them.

    It seems like Newton was fortunate enough[1] to be able to build a model based on what he could observe, that worked really well for explaining/predicting most of the experiments and observations people could make over the next couple centuries. But eventually, you get stuff Newton never imagined like the speed of light as a measurable thing and a fundamental speed limit of the universe, and you need a better theory to make sense of it.

    [1] He was lucky in the sense that the model he could reach based on his own knowledge ended up consistent with so much stuff he couldn’t know yet.

  18. albatross11 says:

    The skyscraper description is nice, but I’d add a couple things that seem important:

    a. A lot of the work done on the first skyscraper is likely to be portable to the second. For example, tons of observations of the natural world done in a pre-Darwinian understanding of biology could be plugged into a Darwinian framework pretty well.

    b. Sometimes, the new skyscraper is built on the same foundation as the old one, but with different materials. Think of the modern version of the theory of evolution–it takes Darwin’s foundation and adds knowledge of how genes work and some better mathematical tools to get a much higher skyscraper.

  19. DaveK says:

    To try and answer the car question: this is something I think about a lot in terms of consciousness studies. When one thinks about consciousness, one has to be very careful with thinking about what it actually is versus how it’s modeled, which is on the edge of human cognitive abilities.

    Remember the old zen koan about whether a tree falling in a forest makes a sound?

    Imagine Earth before there was any intelligent life. You probably have a mental picture of a planet that looks very different. But it’s still imagined as if you were there, sensing it with your current sensory apparatus. Back then there was no sensory apparatus to give it that form. In fact, the sensory apparatus emerged from said conditions. So your thought of what a “tree” is and what a “sound” is- they are represented by certain sensory processes in your brain that you have labeled. Now, there’s something there. But it didn’t “look” like what you’re thinking, because there was no one to “see it”. Furthermore, the “past” is an abstraction used to make sense of things. People often think of it as a point one could go back to. But thinking carefully, there is no “past” or “future”, these are abstractions that let us make useful models of the world. There is only a constant “moving forward” or the “mind nature” of eastern philosophical systems.

    Your car will be there when you go out to see it. But on the one hand, it doesn’t “exist” as a “car” in the same sense that the primitive earth didn’t “exist.” Secondly, if your mind had evolved differently, like say more along the lines of a dolphin where sound is used to sense the shape of things and distances, you would have manipulated that “something” into a different form AND the “mind-object” of your “car” would look totally different, in an alien way.

    This may sound like metaphysical mumbo jumbo, but it’s critical to understanding the unification of Popper and Kuhn. Yes, science is grounded in a different way than other means of understanding: it changes human comprehension in a way that “incorrect” beliefs won’t, providing the ladder that lets you get to more elaborate enthalpic structures like cars and computers and understandings of one’s own nature. At the same time, those structures aren’t the Territory, and the map itself doesn’t just get updated: the map evolves, causes its own changes, and changes what a map is. So it goes (in this analogy) from an outline of your town on paper, to a realtime 3D map of the world on your computer, to something stored in a holodeck where you can visit any location, to something that interacts with and alters your consciousness in a way that is currently beyond your own comprehension.

    But if your maps don’t work, if they don’t improve upon the information content and its ability to let you model your world and change it, the tower doesn’t climb.