When I hear scientists talk about Thomas Kuhn, he sounds very reasonable. Scientists have theories that guide their work. Sometimes they run into things their theories can’t explain. Then some genius develops a new theory, and scientists are guided by that one. So the cycle repeats, knowledge gained with every step.
When I hear philosophers talk about Thomas Kuhn, he sounds like a madman. There is no such thing as ground-level truth! Only theory! No objective sense-data! Only theory! No basis for accepting or rejecting any theory over any other! Only theory! No scientists! Only theories, wearing lab coats and fake beards, hoping nobody will notice the charade!
I decided to read Kuhn’s The Structure Of Scientific Revolutions in order to understand this better. Having finished, I have come to a conclusion: yup, I can see why this book causes so much confusion.
At first Kuhn’s thesis appears simple, maybe even obvious. I found myself worrying at times that he was knocking down a straw man, although of course we have to read the history of philosophy backwards and remember that Kuhn may already be in the water supply, so to speak. He argues against a simplistic view of science in which it is merely the gradual accumulation of facts. So Aristotle discovered a few true facts, Galileo added a few more on, then Newton discovered a few more, and now we have very many facts indeed.
In this model, good science cannot disagree with other good science. You’re either wrong – as various pseudoscientists and failed scientists have been throughout history, positing false ideas like “the brain is only there to cool the blood” or “the sun orbits the earth”. Or you’re right, your ideas are enshrined in the Sacristy Of Settled Science, and your facts join the accumulated store that passes through the ages.
Simple-version-of-Kuhn says this isn’t true. Science isn’t just facts. It’s paradigms – whole ways of looking at the world. Without a paradigm, scientists wouldn’t know what facts to gather, how to collect them, or what to do with them once they had them. With a paradigm, scientists gather and process facts in the ways the paradigm suggests (“normal science”). Eventually, this process runs into a hitch – apparent contradictions, or things that don’t quite fit predictions, or just a giant ugly mess of epicycles. Some genius develops a new paradigm (“paradigm shift” or “scientific revolution”). Then the process begins again. Facts can be accumulated within a paradigm. And many of the facts accumulated in one paradigm can survive, with only slight translation effort, into a new paradigm. But scientific progress is the story of one relatively-successful and genuinely-scientific effort giving way to a different and contradictory relatively-successful and genuinely-scientific effort. It’s the story of scientists constantly tossing out one another’s work and beginning anew.
This gets awkward because paradigms look a lot like facts. The atomic theory – the current paradigm in a lot of chemistry – looks a lot like the fact “everything is made of atoms and molecules”. But this is only the iceberg’s tip. Once you have atomic theory, chemistry starts looking a lot different. Your first question when confronted with an unknown chemical is “what is the molecular structure?” and you have pretty good ideas for how to figure this out. You are not particularly interested in the surface appearance of chemicals, since you know that iron and silver can look alike but are totally different elements; you may be much more interested in the weight ratio at which two chemicals react (which might seem to the uninitiated like a pretty random and silly thing to care about). If confronted with a gas, you might ask things like “which gas is it?” as opposed to thinking all gases are the same thing, or wondering what it would even mean for two gases to be different. You can even think things like “this is a mixture of two different types of gas” without agonizing about how a perfectly uniform substance can be a mixture of anything. If someone asks you “How noble and close to God would you say this chemical sample is?” you can tell them that this is not really a legitimate chemical question, unless you mean “noble” in the sense of the noble gases. If someone tells you a certain chemical is toxic because toxicity is a fundamental property of its essence, you can tell them that no, it probably has to do with some reaction it causes or fails to cause with chemicals in the body. And if someone tells you that a certain chemical has changed into a different chemical because it got colder, you can tell them that cold might have done something to it, it might even have caused it to react with the air or something, but chemicals don’t change into other chemicals in a fundamental way just because of the temperature. None of these things are obvious. All of them are hard-won discoveries.
A field without paradigms looks like the STEM supremacist’s stereotype of philosophy. There are all kinds of different schools – Kantians, Aristotelians, Lockeans – who all disagree with each other. There may be progress within a school – some Aristotelian may come up with a really cool new Aristotelian way to look at bioethics, and all the other Aristotelians may agree that it’s great – but the field as a whole does not progress. People will talk past one another; the Aristotelian can go on all day about the telos of the embryo, but the utilitarian is just going to ask what the hell a telos is, why anyone would think embryos have one, and how many utils the embryo is bringing people. “Debates” between the Aristotelian and the utilitarian may not be literally impossible, but they are going to have to go all the way to first principles, in a way that never works. Kuhn interestingly dismisses these areas as “the fields where people write books” – if you want to say anything, you might as well address it to a popular audience for all the good other people’s pre-existing knowledge will do you, and you may have to spend hundreds of pages explaining your entire system from the ground up. He throws all the social sciences in this bin – you may read Freud, Skinner, and Beck instead of Aristotle, Locke, and Kant, but it’s the same situation.
A real science is one where everyone agrees on a single paradigm. Newtonianism and Einsteinianism are the same kind of things as Aristotelianism and utilitarianism; but in 1850, everybody believed the former, and in 1950, the latter.
I got confused by this – is Aristotelian philosophy a science? Would it be one if the Aristotelians forced every non-Aristotelian philosopher out of the academy, so that 100% of philosophers fell in line behind Aristotle? I think Kuhn’s answer to this is that it’s telling that Aristotelians haven’t been able to do this (at least not lately); either Aristotle’s theories are too weak, or philosophy too intractable. But all physicists unite behind Einstein in a way that all philosophers cannot behind Aristotle. Because of this, all physicists mean more or less the same thing when they talk about “space” and “time”, and they can work together on explaining these concepts without constantly arguing to each other about what they mean or whether they’re the right way to think about things at all (and a Newtonian and Einsteinian would not be able to do this with each other, any more than an Aristotelian and utilitarian).
So how does science settle on a single paradigm when other fields can’t? Is this the part where we admit it’s because science has objective truth so you can just settle questions with experiments?
This is very much not that part. Kuhn doesn’t think it’s anywhere near that simple, for a few reasons.
First, there is rarely a single experiment that one paradigm fails and another passes. Rather, there are dozens of experiments. One paradigm does better on some, the other paradigm does better on others, and everyone argues over which ones should or shouldn’t count.
For example, one might try to test the Copernican vs. Ptolemaic worldviews by observing the parallax of the fixed stars over the course of a year. Copernicus predicts it should be visible; Ptolemy predicts it shouldn’t be. It isn’t, which means either the Earth is fixed and unmoving, or the stars are unutterably unimaginably immensely impossibly far away. Nobody expected the stars to be that far away, so advantage Ptolemy. Meanwhile, the Copernicans posit far-off stars in order to save their paradigm. What looked like a test to select one paradigm or the other has turned into a wedge pushing the two paradigms even further apart.
What looks like a decisive victory to one side may look like random noise to another. Did you know weird technologically advanced artifacts are sometimes found encased in rocks that our current understanding of geology says are millions of years old? Creationists have no trouble explaining those – the rocks are much younger, and the artifacts were probably planted by nephilim. Evolutionists have no idea how to explain those, and default to things like “the artifacts are hoaxes” or “the miners were really careless and a screw slipped from their pocket into the rock vein while they were mining”. I’m an evolutionist and I agree the artifacts are probably hoaxes or mistakes, even when there is no particular evidence that they are. Meanwhile, creationists probably say that some fossil or other incompatible with creationism is a hoax or a mistake. But that means the “find something predicted by one paradigm but not the other, and then the failed theory comes crashing down” oversimplification doesn’t work. Find something predicted by one paradigm but not the other, and often the proponents of the disadvantaged paradigm can – and should – just shrug and say “whatever”.
In 1870, flat-earther Samuel Rowbotham performed a series of experiments to show the Earth could not be a globe. In the most famous, he placed several flags miles apart along a perfectly straight canal. Then he looked through a telescope and was able to see all of them in a row, even though the furthest should have been hidden by the Earth’s curvature. Having done so, he concluded the Earth was flat, and the spherical-earth paradigm debunked. Alfred Wallace (more famous for pre-empting Darwin on evolution) took up the challenge, and showed that the bending of light rays by atmospheric refraction explained Rowbotham’s result. It turns out that light rays curve downward at a rate equal to the curvature of the Earth’s surface! Luckily for Wallace, refraction was already a known phenomenon; if not, it would have been the same kind of wedge-between-paradigms as the Copernicans having to change the distance to the fixed stars.
It is all well and good to say “Sure, it looks like your paradigm is right, but once we adjust for this new idea about the distance to the stars / the refraction of light, the evidence actually supports my paradigm”. But the supporters of old paradigms can do that too! The Ptolemaics are rightly mocked for adding epicycle after epicycle until their system gave the right result. But to a hostile observer, positing refraction effects that exactly counterbalance the curvature of the Earth sure looks like adding epicycles. At some point a new paradigm will win out, and its “epicycles” will look like perfectly reasonable adjustments for reality’s surprising amount of detail. And the old paradigm will lose, and its “epicycles” will look like obvious kludges to cover up that it never really worked. Before that happens…well, good luck.
Second, two paradigms may not even address or care about the same questions.
Let’s go back to utilitarianism vs. Aristotelianism. Many people associate utilitarianism with the trolley problem, which is indeed a good way to think about some of the issues involved. It might be tempting for a utilitarian to think of Aristotelian ethics as having some different answer to the trolley problem. Maybe it does, I don’t know. But Aristotle doesn’t talk about how he would solve whatever the 4th-century BC equivalent of the trolley problem was. He talks more about “what is the true meaning of justice?” and stuff like that. While you can twist Aristotle into having an opinion on trolleys, he’s not really optimizing for that. And while you can make utilitarianism have some idea what the true meaning of justice is, it’s not really optimized for that either.
An Aristotelian can say their paradigm is best, because it does a great job explicating all the little types and subtypes of justice. A utilitarian can say their paradigm is best, because it does a great job telling you how to act in various contrived moral dilemmas.
It’s actually even worse than this. The closest thing I can think of to an ancient Greek moral dilemma is the story of Antigone. Antigone’s uncle declares that her traitorous dead brother may not be buried with the proper rites. Antigone is torn between her duty to obey her uncle, and her desire to honor her dead brother. Utilitarianism is…not really designed for this sort of moral dilemma. Is ignoring her family squabbles and trying to cure typhus an option? No?
But then utilitarianism’s problems are deeper than just “comes to a different conclusion than ancient Greek morals would have”. The utilitarian’s job isn’t to change the ancient Greek’s mind about the answer to a certain problem. It’s to convince him to stop caring about basically all the problems he cares about, and care about different problems instead.
Third, two paradigms may disagree on what kind of answers are allowed, or what counts as solving a problem.
Kuhn talks about the 17th century “dormitive potency” discourse. Aristotle tended to explain phenomena by appealing to essences; trees grew because it was “in their nature” to grow. Descartes gets a bad rap for inventing dualism, but this is undeserved – what he was really doing was inventing the concept of “matter” as we understand it, a what-you-see-is-what-you-get kind of stuff with no hidden essences, which responds mechanically to forces (and once you have this idea, you naturally need some other kind of stuff to be the mind). With Cartesian matter firmly in place, everyone made fun of Aristotle for thinking he had “solved” the “why do trees grow?” question by answering “because it is in their nature”, and this climaxed with the playwright Moliere portraying a buffoonish doctor who claimed to have discovered how opium put people to sleep – it was because it had a dormitive potency!
In Aristotle’s view of matter, saying “because it’s their essence” successfully answers questions like “why do trees grow?”. The Cartesian paradigm forbade this kind of answer, and so many previously “solved” problems like why trees grow became mysterious again – a step backwards, sort of. For Descartes, you were only allowed to answer questions if you could explain how purely-mechanical matter smashing against other purely-mechanical matter in a billiard-ball-like way could produce an effect; a more virtuous and Descartes-aware doctor explained opium’s properties by saying opium corpuscles must have a sandpaper-like shape that smooths the neurons!
Then Newton discovered gravity and caused an uproar. Gravity posits no corpuscles jostling other corpuscles. It sounds almost Aristotelian: “It is the nature of matter to attract other matter”. Newton was denounced as trying to smuggle occultism into science. How much do you discount a theory for having occult elements? If some conception of quantum theory predicts the data beautifully, but says matter behaves differently depending on whether someone’s watching it or not, is that okay? What if it says that a certain electron has a 50% chance of being in a certain place, full stop, and there is no conceivable explanation for which of the two possibilities is realized, and you’re not even allowed to ask the question? What if my explanation for dark matter is “invisible gremlins”? How do you figure out when you need to relax your assumptions about what counts as science, versus when somebody is just cheating?
A less dramatic example: Lavoisier’s theory of combustion boasts an ability to explain why some substances gain weight when burned; they are absorbing oxygen from the air. A brilliant example of an anomaly explained, which proves the superiority of combustion theory to other paradigms that cannot account for the phenomenon? No – “things shouldn’t randomly gain weight” comes to us as a principle of the chemical revolution of which Lavoisier was a part:
In the seventeenth century, [an explanation of weight gain] seemed unnecessary to most chemists. If chemical reactions could alter the volume, color, and texture of the ingredients, why should they not alter weight as well? Weight was not always taken to be the measure of quantity of matter. Besides, weight-gain on roasting remained an isolated phenomenon. Most natural bodies (eg wood) lose weight on roasting as the phlogiston theory was later to say they should.
In previous paradigms, weight gain wasn’t even an anomaly to be explained. It was just a perfectly okay thing that might happen. It’s only within the constellation of new methods and rules we learned around Lavoisier’s time that Lavoisier’s theories solved anything at all.
So how do scientists ever switch paradigms?
Kuhn thinks it’s kind of an ugly process. It starts with exasperation; the old paradigm is clearly inadequate. Progress is stagnating.
Awareness [of the inadequacy of geocentric astronomy] did come. By the thirteenth century Alfonso X could proclaim that if God had consulted him when creating the universe, he would have received good advice. In the sixteenth century, Copernicus’ coworker, Domenico da Novara, held that no system so cumbersome and inaccurate as the Ptolemaic had become could possibly be true of nature. And Copernicus himself wrote in the Preface to the De Revolutionibus that the astronomical tradition he inherited had finally created only a monster.
Then someone proposes a new paradigm. In its original form, it is woefully underspecified, bad at matching reality, and only beats the old paradigm in a few test cases. For whatever reason, a few people jump on board. Sometimes the new paradigm is simply more mathematically elegant, more beautiful. Other times it’s petty things, like a Frenchman invented the old paradigm and a German the new one, and you’re German. Sometimes it’s just that there’s nothing better. These people gradually expand the new paradigm to cover more and more cases. At some point, the new paradigm explains things a little better than the old paradigm. Some of its predictions are spookily good. The old paradigm is never conclusively debunked. But the new paradigm now has enough advantages that more and more people hop on the bandwagon. Gradually the old paradigm becomes a laughingstock, people forget the context in which it ever made sense, and it is remembered only as a bunch of jokes about dormitive potency.
But now that it’s been adopted and expanded and reached the zenith of its power, this is the point at which we can admit it’s objectively better, right?
For a better treatment of this question than I can give, see Samzdat’s Science Cannot Count To Red. But my impression is that Kuhn is not really willing to say this. I think he is of the “all models are wrong, some are useful” camp, thinks of paradigms as models, and would be willing to admit a new paradigm may be more useful than an old one.
Can we separate the fact around which a paradigm is based (like “the Earth orbits the sun”) from the paradigm itself (being a collection of definitions of eg “planet” and “orbit”, ways of thinking, mathematical methods, and rules for what kind of science will and won’t be accepted)? And then say the earth factually orbits the sun, and the paradigm is just a useful tool that shouldn’t be judged objectively? I think Kuhn’s answer is that facts cannot be paradigm-independent. A medieval would not hear “the Earth orbits the sun” and hear the same claim we hear (albeit, in his view wrong). He would, for example, interpret it to mean the Earth was set in a slowly-turning crystal sphere with the sun at its center. Then he might ask – where does the sphere intersect the Earth? How come we can’t see it? Is Marco Polo going to try to travel to China and then hit a huge invisible wall halfway across the Himalayas? And what about gravity? My understanding is the Ptolemaics didn’t believe in gravity as we understand it at all. They believed objects had a natural tendency to seek the center of the universe. So if the sun is more central, why isn’t everything falling into the sun? To a medieval the statement “the Earth orbits the sun” has a bunch of common-sense disproofs everywhere you look. It’s only when attached to the rest of the Copernican paradigm that it starts to make sense.
This impresses me less than it impresses Kuhn. I would say “if you have many false beliefs, then true statements may be confusing in that they seem to imply false statements – but true statements are still objectively true”. Perhaps I am misunderstanding Kuhn’s argument here; the above is an amalgam of various things and not something Kuhn says outright in the book. But whatever his argument, Kuhn is not really willing to say that there are definite paradigm-independent objective facts, at least not without a lot of caveats.
So where is the point at which we admit some things are objectively true and that’s what this whole enterprise rests on?
Kuhn only barely touches on this, in the last page of the book:
Anyone who has followed the argument this far will nevertheless feel the need to ask why the evolutionary process should work. What must nature, including man, be like in order that science be possible at all? Why should scientific communities be able to reach a firm consensus unattainable in other fields? Why should consensus endure across one paradigm change after another? And why should paradigm change invariably produce an instrument more perfect in any sense than those known before? From one point of view those questions, excepting the first, have already been answered. But from another they are as open as they were when this essay began. It is not only the scientific community that must be special. The world of which that community is a part must also possess quite special characteristics, and we are no closer than we were at the start to knowing what these must be. That problem— What must the world be like in order that man may know it?— was not, however, created by this essay. On the contrary, it is as old as science itself, and it remains unanswered. But it need not be answered in this place.
Aaargh! So close!
A lot of the examples above are mine, not Kuhn’s. Some of them even come from philosophy or other nonscientific fields. Shouldn’t I have used the book’s own examples?
Yes. But one of my big complaints about this book is that, for a purported description of How Science Everywhere Is Always Practiced, it really just gives five examples. Ptolemy/Copernicus on astronomy. Alchemy/Dalton on chemistry. Phlogiston/Lavoisier on combustion. Aristotle/Galileo/Newton/Einstein on motion. And ???/Franklin/Coulomb on electricity.
It doesn’t explain any of the examples. If you don’t already know what Coulomb’s contribution to electricity is and what previous ideas he overturned, you’re out of luck. And don’t try looking it up in a book either. Kuhn says that all the books have been written by people so engrossed in the current paradigm that they unconsciously jam past scientists into it, removing all evidence of paradigm shift. This made parts of the book a little beyond my level, since my knowledge of Coulomb begins and ends with “one amp times one second”.
Even saying Kuhn has five examples is giving him too much credit. He usually brings in one of his five per point he’s trying to make, meaning that you never get a really full view of how any of the five examples exactly fit into his system.
And all five examples are from physics. Kuhn says at the beginning that he wished he had time to talk about how his system fits biology, but he doesn’t. He’s unsure whether any of the social sciences are sciences at all, and nothing else even gets mentioned. This means we have to figure out how Kuhn’s theory fits everything from scattershot looks at the history of electricity and astronomy and a few other things. This is pretty hard. For example, consider three scientific papers I’ve looked at on this blog recently:
– Cipriani, Ioannidis, et al perform a meta-analysis of antidepressant effect sizes and find that although almost all of them seem to work, amitriptyline works best.
– Ceballos, Ehrlich, et al calculate whether more species have become extinct recently than would be expected based on historical background rates; after finding almost 500 extinctions since 1900, they conclude they definitely have.
– Terrell et al examine contributions to open source projects and find that men are more likely to be accepted than women when adjusted for some measure of competence they believe is appropriate, suggesting a gender bias.
What paradigm is each of these working from?
You could argue that the antidepressant study is working off of the “biological psychiatry” paradigm, a venerable collection of assumptions that can be profitably contrasted with other paradigms like psychoanalysis. But couldn’t a Hippocratic four-humors physician of a thousand years ago have done the same thing? A meta-analysis of the effect sizes of various kinds of leeches for depression? Sure, leeches are different from antidepressants, but it doesn’t look like the belief in biological psychiatry is affecting anything about the research other than the topic. And although the topic is certainly important, Kuhn led me to expect something more profound than that. Maybe the paradigm is evidence-based-medicine itself, the practice of doing RCTs and meta-analyses on things? I think this is a stronger case, but a paradigm completely divorced from the content of what it’s studying is exactly the sort of weird thing that makes me wish Kuhn had included more than five examples.
As for the extinction paper, surely it can be attributed to some chain of thought starting with Cuvier’s catastrophism, passing through Lyell, and continuing on to the current day, based on the idea that the world has changed dramatically over its history and new species can arise and old ones disappear. But is that “the” paradigm of biology, or ecology, or whatever field Ceballos and Ehrlich are working in? Doesn’t it also depend on the idea of species, a different paradigm starting with Linnaeus and developed by zoologists over the ensuing centuries? It looks like it dips into a bunch of different paradigms, but is not wholly within any.
And the open source paper? Is “feminism” a paradigm? But surely this is no different than what would be done to investigate racist biases in open source. Or some right-winger looking for anti-Christian biases in open source. Is the paradigm just “looking for biases in things?”
What about my favorite trivial example, looking both ways when you cross the street so you don’t get hit by a bus? Is it based on a paradigm of motorized transportation? Does it use assumptions like “buses exist” and “roads are there to be crossed”? Was there a paradigm shift between the bad old days of looking one way before crossing, and the exciting new development of looking both ways before crossing? Is this really that much more of a stretch than calling looking for biases in things a paradigm?
Outside the five examples Kuhn gives from the physical sciences, identifying paradigms seems pretty hard – or maybe too easy. Is it all fractal? Are there overarching paradigms like atomic theory, and then lower-level paradigms like organic chemistry, and then tiny subsubparadigms like “how we deal with this one organic compound”? Does every scientific experiment use lots of different paradigms from different traditions and different levels? This is the kind of thing I wish Kuhn’s book answered instead of just talking about Coulomb and Copernicus over and over again.
In conclusion, all of this is about predictive coding.
It’s the same thing. Perception getting guided equally by top-down expectations and bottom-up evidence. Oh, I know what you’re thinking. “There goes Scott again, seeing predictive coding in everything”. And yes. But also, Kuhn does everything short of come out and say “When you guys get around to inventing predictive coding, make sure to notice that’s what I was getting at this whole time.”
Don’t believe me? From the chapter Anomaly And The Emergence Of Scientific Discovery (my emphasis, and for “anomaly”, read “surprisal”):
The characteristics common to the three examples above are characteristic of all discoveries from which new sorts of phenomena emerge. Those characteristics include: the previous awareness of anomaly, the gradual and simultaneous emergence of both observational and conceptual recognition, and the consequent change of paradigm categories and procedures often accompanied by resistance.
There is even evidence that these same characteristics are built into the nature of the perceptual process itself. In a psychological experiment that deserves to be far better known outside the trade, Bruner and Postman asked experimental subjects to identify on short and controlled exposure a series of playing cards. Many of the cards were normal, but some were made anomalous, e.g., a red six of spades and a black four of hearts. Each experimental run was constituted by the display of a single card to a single subject in a series of gradually increased exposures. After each exposure the subject was asked what he had seen, and the run was terminated by two successive correct identifications.
Even on the shortest exposures many subjects identified most of the cards, and after a small increase all the subjects identified them all. For the normal cards these identifications were usually correct, but the anomalous cards were almost always identified, without apparent hesitation or puzzlement, as normal. The black four of hearts might, for example, be identified as the four of either spades or hearts. Without any awareness of trouble, it was immediately fitted to one of the conceptual categories prepared by prior experience. One would not even like to say that the subjects had seen something different from what they identified. With a further increase of exposure to the anomalous cards, subjects did begin to hesitate and to display awareness of anomaly. Exposed, for example, to the red six of spades, some would say: That’s the six of spades, but there’s something wrong with it— the black has a red border. Further increase of exposure resulted in still more hesitation and confusion until finally, and sometimes quite suddenly, most subjects would produce the correct identification without hesitation. Moreover, after doing this with two or three of the anomalous cards, they would have little further difficulty with the others. A few subjects, however, were never able to make the requisite adjustment of their categories. Even at forty times the average exposure required to recognize normal cards for what they were, more than 10 per cent of the anomalous cards were not correctly identified. And the subjects who then failed often experienced acute personal distress. One of them exclaimed: “I can’t make the suit out, whatever it is. It didn’t even look like a card that time. I don’t know what color it is now or whether it’s a spade or a heart. I’m not even sure now what a spade looks like. My God!” In the next section we shall occasionally see scientists behaving this way too.
Either as a metaphor or because it reflects the nature of the mind, that psychological experiment provides a wonderfully simple and cogent schema for the process of scientific discovery.
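To make the “top-down expectations vs. bottom-up evidence” framing concrete, here’s a toy Bayesian sketch of the card experiment. This is purely my illustration, not anything Kuhn (or Bruner and Postman) wrote, and the prior and likelihood numbers are made up: the point is just that a strong enough prior that “cards are normal” swamps weak sensory evidence, so the anomalous card is perceived as normal until repeated exposures accumulate enough evidence to flip the posterior.

```python
# Toy Bayesian model of the Bruner–Postman card experiment (my illustration,
# with invented numbers): a strong prior that cards are normal dominates
# weak per-exposure evidence, then flips suddenly once evidence accumulates.

PRIOR_ANOMALOUS = 0.01   # assumed: anomalous cards seem wildly unlikely a priori
LIKELIHOOD_RATIO = 3.0   # assumed: each brief exposure favors "anomalous" 3:1

def posterior(prior, lr, n_exposures):
    """Posterior probability of 'anomalous' after n independent exposures,
    computed in odds form: posterior odds = prior odds * lr ** n."""
    odds = (prior / (1 - prior)) * lr ** n_exposures
    return odds / (1 + odds)

for n in range(7):
    p = posterior(PRIOR_ANOMALOUS, LIKELIHOOD_RATIO, n)
    verdict = "sees the anomaly" if p > 0.5 else "sees a normal card"
    print(f"{n} exposures: P(anomalous) = {p:.3f} -> {verdict}")
```

With these particular made-up numbers the subject confidently “sees a normal card” for the first four exposures even though every exposure favored the anomaly, then crosses over around the fifth – a crude analogue of the hesitation-then-sudden-recognition pattern Kuhn describes, and of paradigms resisting anomalies until they can’t.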
And from Revolutions As Changes Of World-View:
Surveying the rich experimental literature from which these examples are drawn makes one suspect that something like a paradigm is prerequisite to perception itself. What a man sees depends both upon what he looks at and also upon what his previous visual-conceptual experience has taught him to see. In the absence of such training there can only be, in William James’s phrase, “a bloomin’ buzzin’ confusion.” In recent years several of those concerned with the history of science have found the sorts of experiments described above immensely suggestive.
If you can read those paragraphs and honestly still think I’m just irrationally reading predictive coding into a perfectly innocent book, I have nothing to say to you.
I think this is my best answer to the whole “is Kuhn denying an objective reality” issue. If Kuhn and the predictive coding people are grasping at the same thing from different angles, then both shed some light on each other. I think I understand the way that predictive coding balances the importance of pre-existing structures and categories with a preserved belief in objectivity. If Kuhn is trying to extend the predictive coding model of the brain processing information to the way the scientific community as a whole processes it, then maybe we can import the same balance and not worry about it as much.
Nice synopsis of Kuhn! Especially liked your chemistry analogies — “if someone tells you that a certain chemical has changed into a different chemical because it got colder,” etc.
I think an important difference between philosophy and physical science is that it’s easy to gauge in a relatively general (ha) way how and why one paradigm supersedes another. Specifically, Aristotle < Newton < Einstein because each paradigm explains progressively more observable phenomena with progressively more cogent logic, and rational scientists will abandon Einstein the same way they abandoned Newton when the appropriate model arrives. You may be tempted to note that the accepted postulates of a paradigm could potentially be arbitrarily modified in a way that does not alter its epistemological weight but would be disastrous for the perceived beauty of the paradigm. The quantum paradigm, for example, has so many free parameters as to admit [over a dozen](https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics) interpretations. Even here, it's not hard to convince scientists that as a paradigm (or maybe a representative element of a class of paradigms) quantum theory is a better explanation than what we had before, in terms of the logical finesse and solid utility it provides. This case is kind of interesting because many scientists wanted one or more paradigms, such as the MSSM and its myriad variants, or even string theory, to replace it. So far none have proved able to make a tactile, falsifiable claim or point humanity's most powerful scientific equipment at undiscovered phenomena required by any of the proposed paradigms.
Moving from Kant to Wittgenstein (or whatever, I'm no philosopher) is not like moving from Newton to Einstein. Einstein unambiguously stood on Newton's shoulders, and the path from one to the other is decisively demonstrable in a few dozen pages of derivation. If translating from one philosopher to another takes an order of magnitude more effort, in less savory language, then either I've successfully chosen a happier intellectual pathway, or I'm missing out on a universe of interesting things to think about.
It sounds like you are conflating quantum mechanics with the standard model. Quantum is a paradigm. It has no free parameters. It’s not even really what physicists today would call a theory. Rather it’s a sense of what sort of objects a theory should account for, what sort of questions a theory should be able to answer, what counts as a description of a system, etc.
The standard model, on the other hand, is not a paradigm. It has several free parameters. It is a specific example of a quantum theory.
The free parameters of the standard model have nothing to do with interpretations of quantum mechanics.
Someone ought to do a study on paradigm shifts alone. Pure objectivity may not exist if the term “objective” was created by biased human minds. Paradigms are likely to change for many reasons, including the reason that humans create paradigms to begin with, and determine what may or may not be “objective.” Is there such a thing as being “too objective” in science? Can being too objective be part of the reason for a need to change paradigms? I may not have the intelligence or knowledge that many of you here have, but I wanted to try and ask these questions.
In the post above, the one passage that stuck out for me was this: “It doesn’t explain any of the examples. If you don’t already know what Coulomb’s contribution to electricity is and what previous ideas he overturned, you’re out of luck. And don’t try looking it up in a book either. Kuhn says that all the books have been written by people so engrossed in the current paradigm that they unconsciously jam past scientists into it, removing all evidence of paradigm shift. This made parts of the book a little beyond my level, since my knowledge of Coulomb begins and ends with ‘one amp per second’.” I reacted to your statement about “removing all evidence of paradigm shift” when considering my questions above. The processes for which science and its scientists innovate include, IMHO, the same processes for which paradigms are invented and shifted.
It’s important to remember that Kuhn wrote this seven decades ago. It was one of the most influential books of pop philosophy in the 1960s-70s, influencing the counterculture of the time, so it is very much “in the water supply.” Much of what’s right in it is now obvious; what’s wrong is salient. To make sense of the book, you have to understand the state of the philosophy of science before then (logical positivism had just conclusively failed), and since then (there has been a lot of progress since Kuhn, sorting out what he got right and wrong).
The issue of his relativism and attitude to objectivity has been endlessly rehashed. The discussion hasn’t been very productive; it turns out that what “objective” means is more subtle than you’d think, and it’s hard to sort out exactly what Kuhn thought. (And it hasn’t mattered what he thought, for a long time.)
Kuhn’s “Postscript” to the second edition of the book does address this. It’s not super clear, but it’s much clearer than the book itself, and if anyone wants to read the book, I would strongly recommend reading the Postscript as well. Given Scott’s excellent summary, in fact I would suggest *starting* with the Postscript.
The point that Kuhn keeps re-using a handful of atypical examples is an important one (which has been made by many historians and philosophers of science since). In fact, the whole “revolutionary paradigm shift” paradigm seems quite rare outside the examples he cites. And, overall, most sciences work quite differently from fundamental physics. The major advance in meta-science from about 1980 to 2000, imo, was realizing that molecular biology, e.g., works so differently from fundamental physics that trying to subsume both under one theory of science is infeasible.
As for “trying to extend the predictive coding model”… that didn’t exist yet. I think you realize that, but… it doesn’t make sense to accuse him of this. (Also I think it has little to do with what he was doing; but I also don’t have a high opinion of that model, so.)
Here’s a PDF with the relevant postscript for those interested, pages 174-210.
It had become unfashionable. People were flocking to Quine or the late Wittgenstein, and pretending it was because Logical Positivism had been conclusively refuted. Later, their students, who never read the positivists, have often uncritically accepted this version of history. Kuhn also didn’t help; the “positivist” in his book is an absurd strawman (or perhaps represents people in the history of science rather than philosophy of science? I don’t know who was big in history of science in the mid 20th century). Particularly ironic as The Structure of Scientific Revolutions was first published as part of the second volume of the International Encyclopedia of Unified Science, a Logical Positivist project.
Would you agree with wikipedia that this is a reasonable description of logical positivism?
From this perspective, it seems like LP would imply that “vanilla tastes different from cinnamon” is a meaningless statement. Nonetheless, pretty much any human being in the history of the world who has been exposed to both substances would agree that “vanilla tastes different from cinnamon” is both meaningful and undeniably true.
I think any philosophical theory whose implications disagree with the experiences of every human being in the history of the earth is pretty fairly described as “conclusively refuted”.
Obvious objection is obvious: “But cinnamon and vanilla have empirically distinct chemical structures, and therefore the statement is meaningful under LP!” First, you have to empirically prove that distinct chemical structures cannot induce an identical subjective sensation of flavor. Good luck with that.
More subtle objection: “Well, almost every human who’s ever lived experiences sunrise and sunset, but we know those experiences are misleading geocentric descriptions of the earth’s rotation.” Yes, but “earth” and “sun” are presumably physical objects whose orientation can be investigated through non-subjective (or less subjective) means, whereas the flavor of vanilla and cinnamon can only be investigated purely subjectively.
Less obvious objections: Well, maybe, but aren’t you just adding epicycles to LP at this point?
I’m curious as to what bizarre chain of reasoning led you to think that tasting something does not count as empirical observation.
Please empirically demonstrate that anyone has ever tasted anything at all.
Obvious objection: “People frequently talk about subjective experiences of flavor in consistent ways.” Please empirically demonstrate the difference between dualistic talk about the subjective experience of flavor and epiphenomenal talk about the subjective experience of flavor. A solution to the hard problem of consciousness would be pretty darned neat!
I don’t know what you mean by “empirically demonstrate” here, but it doesn’t seem to have anything to do with what the Logical Positivists meant. I’d ask why tasting was used as an example when it seems like whatever point you are trying to make ought to apply equally well to vision, but I fear the tangle of confusions is too great to make any progress in a comment thread.
I think “empirically demonstrate” is pretty simple, right? (At least, the LPs seemed to think so.) Provide (preferably conclusive) observational evidence that human beings subjectively experience flavor. Warning: you may have to solve the hard problem of consciousness in the process.
Somewhat arbitrarily. I think it’s easier to drive intuition with flavor since it’s harder to decompose flavors into simpler experiences than it is to decompose vision into simpler experiences.
Is there some important reason I should have used vision instead?
I hope you’re willing to consider the possibility that you may be the one who is confused.
Trust me, this is conclusive proof that you know nothing about the Logical Positivists.
Why should I trust you? Shouldn’t I expect you to empirically demonstrate it? 🙂
In other words, I’d really appreciate references to show me how wrong I am rather than you simply asserting it is so.
Also, that was a parenthetical statement, and refuting it does not refute the rest of my argument that I can see. Can you address my argument instead of my snide, likely unwarranted aside?
I don’t know of a really good Logical Positivism for Dummies (Ayer’s Language, Truth and Logic has historically been treated as that, but contains a number of Ayer idiosyncrasies as well as some mistakes that go beyond the inevitable oversimplifications of such an effort). The Library of Living Philosophers volume on Carnap is one of the best volumes in the series; I suppose I could recommend Carnap’s reply to his critics from that volume, though that isn’t particularly entry level. Or there’s the recent scholarship by people like Michael Friedman I mention elsewhere in this thread.
Thanks, that’s helpful.
While we’re at it, you seem to have claimed that “vanilla tastes different from cinnamon” is meaningful, i.e. empirically verifiable.
Can you provide an empirical verification for it? Is it just that people tend to claim as much verbally?
That’s what we rely on for nearly every case of empirical verification. I still think that you are importing other issues which have nothing to do with what the Logical Positivists were talking about, but you’re also giving me the impression you’re confused about those issues. I do not want to embark on the complex and surely hopeless task of simultaneously discussing your mistaken theories about consciousness and your mistaken idea about how they are connected with Logical Positivism. It’s too unlikely that everything could be kept straight in any way that would lead to meaningful progress.
Your position seems to imply that nothing can be verified empirically. Or is there a reason that you apply this to taste but not observations made with other senses?
Aside from this being really rude and arrogant…
Are there demonstrably non-mistaken theories of consciousness out there? Not sure on what basis you’d conclude that my “theories” (they’re actually fairly obvious and uncontroversial observations) about consciousness are definitely mistaken.
Fair enough. From the foregoing, I think it’s too unlikely that you would ever consider changing your mind about anything.
If you don’t see the problem with this, then I don’t know how to help you. I’ll just mention that a lot of people verbally attest to subjective experiences of God, but that is not typically treated as empirical verification of God’s existence.
Correct. I think that “empirical verification” works as a common-sense quick’n’dirty approach to knowledge (largely because humans seem to have broadly consilient subjective experiences), but if you poke it hard enough it falls apart. It’s not a strictly rigorous concept.
In fact, a lot of people seem to think that Popper was right that empirical verification is impossible and you have to rely on conjecture and falsifiability.
I’m not necessarily endorsing falsifiability or fallibilism, just pointing out that I’m hardly alone in thinking that “empirical verification” is a fraught concept.
I mentioned before — it’s easier to drive intuition with simpler examples, and taste can’t be decomposed into simpler concepts as easily as vision or hearing. Other than that, consider it an arbitrary example.
This is really the basic problem; we clearly have wildly divergent ideas about what is obvious and uncontroversial (and, for that matter, what are observations, apropos of the present topic).
The only “theories” of consciousness that I’ve used so far are the problem of other minds and the hard problem of consciousness. I think it makes sense to call these “observations” rather than “theories” as neither is an explanation (they’re both called “problems” for a reason). I don’t think the existence of either of these problems is especially controversial. The problem of other minds seems obvious to me, though perhaps the hard problem of consciousness is not so obvious. “What Is It Like To Be a Bat?” makes it seem obvious to me in retrospect, but I suppose I was much more receptive to logical positivism before I read it.
As far as the relation of these ideas to logical positivism — you’re right! I haven’t read much about logical positivism from its proponents (just a few papers by Carnap here and there probably), and I don’t know what they’ve said about these particular epistemological problems. I’ll read more Carnap before running my mouth again.
On a personal note — I think you should strive to be a little more charitable. Throughout this discussion, you’ve consistently “argued” by asserting I am ignorant and confused rather than discussing the ideas directly. While this is not exactly uncommon in the field of philosophy, I do think it’s somewhat contrary to the spirit of it.
I’m not familiar with logical positivists so not sure what they would have replied to this, but empirically demonstrating that cinnamon and vanilla taste different seems straightforward to me. Simply have a bunch of people taste a cinnamon flavor and a vanilla flavor, and ask if they taste any difference. If they do, the differing tastes have been empirically demonstrated.
This isn’t even a contrived thought experiment setup, it’s the kind of setup that’s actually in use for empirically testing things like “can people actually tell the taste of Coke and Pepsi apart if you don’t tell them which one to expect”. Or for whether tea tastes different depending on which of tea or milk is added to the cup first.
I don’t think that works just like that, because there’s the placebo effect, or people just imagining a difference, or indeed a real difference between mouthful A and mouthful B simply from B coming second.
I think you’d need to be more sophisticated than “simply have a bunch of people taste a cinnamon flavor and a vanilla flavor, and ask if they taste any difference.”
Placebo effects are empirically measured effects. It seems like you are using the word “empirical” to stand in for some other concept.
I did not use the word ‘Empirical’.
I do not disagree with Kaj because I think having a bunch of people taste it is a non-empirical demonstration, but because it does not demonstrate the claim: there are too many other perfectly reasonable possibilities, and the empirical observation “people telling you they taste different” would be observed whether or not there was an actual taste difference.
Wouldn’t a well-designed study on taste control for that in a number of ways? Different groups would taste in different orders, some would taste the same thing twice, etc.
I seem to have you mixed up a bit with wysinwygymmv, so apologies for that.
Keeping people blinded about what they’re tasting should address the placebo problem. In the tea experiment I linked, four cups were prepared in one way and four cups were prepared in another, and the lady was then given them in a random order and asked to identify them. After she had correctly identified all eight based on taste alone, it was concluded that they must indeed taste different.
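[As a side note from the editor: the improbability of the lady's perfect score can be checked with a little combinatorics. In Fisher's classic design described above, the taster knows there are four cups of each kind, so a pure guesser picks one of the C(8,4) = 70 possible splits at random. A minimal sketch:]

```python
from math import comb

# Fisher's "lady tasting tea" design: 8 cups, 4 prepared each way.
# The taster knows the split and must say which four were milk-first,
# so a random guesser is choosing one of C(8,4) possible answers.
ways = comb(8, 4)          # 70 possible guesses
p_all_correct = 1 / ways   # chance of a perfect score by pure luck

print(ways)            # 70
print(p_all_correct)   # ~0.0143, i.e. about 1.4%
```

So identifying all eight correctly is strong (though not conclusive) evidence that she really could taste the difference.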
I take subjective experiences of God to be evidence for His existence, and reports of such to be so also, in the same way that perceiving an object with any of my other senses is evidence for its existence, and reports (by others) of such perception is likewise evidence. Though I would more naturally cast this in Bayesian terms — such experiences, and reports of such experiences, are more likely in worlds in which God exists.
As an aside, I appreciated your questions in this thread. Though I think I agree with Protagoras on the subject at hand, I don’t agree with his uncharitable dismissal of you, and your comments prompted a useful discussion.
Perhaps the most obvious objection is that logical positivism claims that all meaningful statements must be either empirical or logical-deductive. Anything else is meaningless nonsense.
This claim is not obviously either empirical nor logical-deductive…
Apart from that, the entire program was breaking apart – sense-datum theory clearly didn’t work out, and nothing was in fact produced. Karl Popper had ripped them a new one (if politely) long before Kuhn came along.
The verification principle is logical, and sense-datum theory was always controversial within Logical Positivism, it was never a central assumption of the movement. There’s been some interesting scholarly work on this recently, with Michael Friedman being one of the most prominent people involved. Anyone genuinely interested might want to check out his Reconsidering Logical Positivism, or the Cambridge Companion to Carnap (which he edited).
That is interesting.
I am confused by it. Could you give an example of a sentence which seems intuitively meaningful, but is neither empirical nor logical? (To have an example where the principle allows us to eliminate a false negative.)
I don’t know what seems intuitively meaningful to you. Carnap used Heidegger as an example of metaphysical nonsense; one of the sentences he quoted was “the nothing is more fundamental than the not and the negation.” Ayer took a sentence from Bradley, “the Absolute enters into, but is not itself capable of, evolution and progress.” If those are not intuitively meaningful to you, I don’t know that I have clearer examples. I can think of statements that almost anyone would say are meaningful that the LPists would complain about, but they aren’t really clearer, they just involve ambiguity rather than being controversially meaningful. For example, “God exists” is, in their view, extremely ambiguous; it is sometimes meaningless, sometimes verifiable and false, sometimes verifiable and true (e.g. if said by a pantheist), indeed sometimes perhaps a logical claim considering how diverse and vague people’s conceptions of God have been.
Ok, now I see what confused me: the problem is that to know whether a sentence is logical, we first need to know the meaning of the sentence.
And to know the meaning of the sentence, it must have meaning.
I think the first sentence you gave is meaningless, because it contains “type errors,” and so the best I can do is try to guess what the author wanted to say (but I just don’t get it).
The second sentence seems clearer to me, but then I would place the meaning I get from it in the logical kind. I would rephrase it like this:
“It follows from the properties of the Absolute that it is part of the things which change, but does not change itself” (like atoms, which take part in chemical reactions without changing themselves).
(I assume the Absolute to be a meaningful and abstract concept here. Otherwise I just consider every sentence containing a meaningless term, to be meaningless)
It seems to me that as soon as I understand a sentence, I can at least give it a logical meaning.
And if a sentence is meaningless, I can’t understand it (and I have to guess the intentions of the author).
But there are sentences I don’t understand, and I am unsure whether they are meaningful or not. And if I don’t understand them, I can’t know whether they are of the logical kind or not.
So if I want a criterion, I will need one which doesn’t rely on the global meaning of the sentence, but on the structure and meaning of its parts.
Now I can also interpret too much, and give a meaning to a sentence without one, but that would not be very different from misunderstanding a meaningful sentence.
Huh, I expected this comment to contain a link to your excellent post on ontological remodeling over at Meaningness, which coincidentally I read right before reading this. I think it clarifies a lot of issues raised in this post, such as what a paradigm might be.
I haven’t read Kuhn and I don’t know whether I’m interpreting em correctly, but to me it seems not that simple at all.
Saying there is an objective reality doesn’t explain why this reality is comprehensible. In statistical learning theory there are various analyses of what mathematical conditions must hold for it to be possible to learn a model from observations (i.e. so that you can avoid the no-free-lunch theorems) and how difficult it is to learn it, and when you add computational complexity considerations into the mix it becomes even more complicated. Our understanding of these questions is far from complete.
In particular, our ability to understand physics seems to rely on the hierarchical nature of physical phenomena. You can discover classical mechanics without knowing anything about molecules or quantum physics, you can discover atomic and molecular physics while knowing little about nuclear physics, and you can discover nuclear and particle physics without understanding quantum gravity (i.e. what happens to spacetime on the Planck scale). If the universe were such that it is impossible to compute the trajectory of a tennis ball without string theory, we might never have discovered any physics.
On the other hand, I’m sometimes afraid we may be hitting exactly this kind of wall now with quantum gravity (or whatever is going on at that scale). It seems that there are too many things going on at once, for which we cannot make nice and isolated experiments. In the predictive coding picture, there are a lot of anomalies to be explained, and we may be just overwhelmed by the confusion.
I mostly just hope that like every other time in the past, it is not the case!
I think this is also true of most other sciences. For example, you can learn a lot about animal behavior or development or anatomy without knowing anything about evolution. It’s just that evolution helps explain a lot of what you end up seeing there.
Our minds and attention and knowledge are limited, so it’s hard to make much progress in science unless we can examine a fairly small and tractable system at a time, and make sense of it without needing to know everything about the universe around it. That knowledge makes it possible to develop new models, and also constrains them–your shiny new “evolution” model needs to still be consistent with the stuff we have already learned.
[ETA: I suspect this is one source of difficulty in the social sciences–it’s much harder to get down to a small and isolated enough realm that you can learn about it without needing to think about all the other random stuff in the surrounding environment.]
Wait, what? Then how do we ever see the horizon?
I’m like 90% sure that the paradigm here is “discrimination is bad”. This actually increases my confidence in paradigm theory a bit.
Edit: More than just “bad”; there’s often an implicit assumption that if X is discriminated against, that proves they are being semi-deliberately sabotaged by the not-Xs. Hence the persistent idea that it’s incoherent to propose that anything an X does might be part of the discrimination against Xs – if an X does it that proves it’s not actually anti-X, since what X would hate Xs?
Apparently the rate of refraction depends on the height of the measurement (because that is correlated with the density of the air) and atmospheric conditions (there was probably a temperature inversion at the place and time the experiments were conducted).
Yeah, this can’t be how it turned out, because that would have prevented anyone from noticing that the Earth wasn’t flat in the first place.
For example, a primary piece of evidence that the Earth is round was the fact that ships arriving from beneath the horizon become visible from the top down (hence “beneath the horizon”). But this result would mean that that’s impossible; instead of ships vanishing below the horizon from the bottom up and appearing from the top down, there would be no horizon and no ship would ever appear or disappear.
Also, as Scott hinted at, their assumptions about what forms of merit are appropriate. And I would add assumptions about comprehensiveness of the methodology for determining merit too. What if men just work better in all male environments? What if this boost to productivity outweighs the benefit of hiring females? What if boosting productivity isn’t even always the goal? What if hiring females boosts productivity short term but causes team instability long-term? What if employees have an upwards effect on management and company culture and causes the company to change its functionality long-term? For example, hiring tons of women and the company becomes feminized and loses its competitiveness and vision. I am not saying that’s the case, but there are so many plausible reasons why there could be a male bias and the only reason we even care is because of feminist ideology which shifts the burden of proof via its own paradigm.
I don’t think the feminist paradigm is just “discrimination is bad.” I’d include some other beliefs:
Discrimination is the main/only driver of differences in outcomes.
This discrimination is mostly in the present (rather than being knock on effects of previous discrimination).
Promoting different gender roles (even if not mandatory) is bad.
Abuse against women is ingrained in our culture.
I think that’s a good way to put it. In general, I think that the modern Social Justice movement rests on the following assumptions:
* A person’s demographic group tells us most of (if not everything) we need to know about that person.
* Different demographic groups are engaged in oppression; the concept of “intersectionality” is an attempt to derive a fine-grained matrix of oppressor vs. oppressed for each combination of demographics.
* Oppressor groups enjoy unearned privileges (extracted from groups they are oppressing) and are thus unable to accurately model the thoughts, beliefs, and feelings of oppressed groups.
In aggregate, this means that discrimination is not some accidental error or a deliberate evil plot; rather, it is a baseline feature of our society that can never be fully removed. Since most social institutions — including art, science, politics, etc. — are dominated by oppressors (which makes sense, given their privilege), there’s no hope of change from within; the only solution is constant struggle and possibly physical segregation.
A Coulomb is not “one amp per second”, it’s one amp-second (in other words, one amp times one second).
Because an ampere (a unit of current) is the passage of one coulomb (a unit of charge) per second.
A = C / s is equivalent to C = A x s.
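[A quick numerical illustration of the unit relation above; the figures are arbitrary examples, not from the comment:]

```python
# Charge (coulombs) = current (amperes) * time (seconds): C = A * s.
def charge_coulombs(current_amps: float, time_seconds: float) -> float:
    return current_amps * time_seconds

# A steady 2 A current flowing for 3 s transfers 6 C of charge...
print(charge_coulombs(2.0, 3.0))  # 6.0

# ...and inverting, A = C / s: 6 C passing in 3 s is a 2 A current.
print(6.0 / 3.0)  # 2.0
```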
I really enjoyed this post, especially the perspective-shifting examples in section I, which helped me to get the concept of paradigm shifts, and specifically the existence of radically different but reasonable-at-the-time paradigms, at a gut level. (The conclusion was interesting too.)
I liked this take on it, from Steven Horst’s Cognitive Pluralism, where he’s quoting some of Kuhn’s later writing (the bolded parts are a quote within the quote):
EDIT: as always, someone else said it better:
With two important differences. Firstly, one reduces to the other. If we diligently plug in all the values for all the relativity equations for an object weighing 1 kg traveling at 1 m/s; and ignore any term too small to care about; we’ll end up with Newton’s Laws. This is not an accident, because of the second difference between Newtonianism/Einsteinianism and philosophical schools: falsifiability. We can stage an experiment that will tell us, not which equations are more true in some Platonic sense, but which of them better reflect reality. For example, we could look at the orbit of Mercury, plug in the values both ways, and check to see which model better matches the actual orbit we observe.
BTW, we can meanwhile keep using the much simpler Newtonian equations for most of our work, since we don’t need infinite precision; in fact, we can use blatantly false equations such as “E=mgh” — which assumes that the Earth is not only flat, but also the only planet in existence! It’s not a “paradigm shift”, so much as a “paradigm shuffle”.
You can’t do either of those things with philosophy, AFAIK, though I could be wrong.
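[To put a number on how small the relativistic correction is at everyday speeds, here is a sketch of the Lorentz factor γ = 1/√(1 − v²/c²); the specific figures are my own illustration, not from the comment:]

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2/c^2); approaches 1 as v -> 0."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At 1 m/s the leading correction term, (1/2)(v/c)^2, is about 5.6e-18,
# too small for float64 to even resolve next to 1.0, so compute it directly:
correction = 0.5 * (1.0 / C) ** 2
print(correction)  # ~5.6e-18: relativity is utterly negligible here

# Only at a substantial fraction of light speed does gamma depart from 1:
print(lorentz_factor(0.1 * C))  # ~1.005
```

This is the "one paradigm reduces to the other" point made concrete: every relativistic term vanishes into the noise at tennis-ball speeds, which is why Newton's equations keep working for bridges and satellites.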
I agree that this is an important point, and true of most scientific paradigm shifts, but I don’t think it’s true of all. I don’t see how Copernican cosmology reduces to Ptolemaic cosmology under simplifying assumptions, for example.
They are both particular cases of a coordinate-independent formulation of mechanics, as we now know to exist. So I’d say the whole “Matryoshka” paradigm idea holds pretty well.
Also, Kuhn would deny that any such reduction actually takes place. He would say that the metaphysical claims, the concept of matter, and so on, are too different between the paradigms to be compared this way. “Einsteinian matter” and “Newtonian matter” are not the same thing, he would argue, nor is gravity, nor space, nor time. You’re not talking about the same thing, even when it sounds like you are.
I mean, Kuhn can argue that, but it doesn’t really matter (heh). In science, the only time we’re talking about actual science is when we’re writing equations. Everything else is just a colorful metaphor. For example, “F=ma” is science (give or take a few derivatives); “objects in motion tend to remain in motion unless acted upon by an outside force” is a metaphor. I think that Kuhn (and philosophers in general) can reach many interesting and poetic conclusions about these metaphors; but, at the end of the day, if you want to build a bridge, launch a satellite, or predict eclipses, all of that is irrelevant.
In terms of equations, though, it’d be easy to see how a huge formula with terms for time dilation etc. reduces to a simpler one if all the extra terms are epsilon. Yes, the two equations are describing slightly different things, but given that neither of them is 100% accurate anyway (since 100% accuracy does not exist), you’re free to pick the one that strikes the best balance between explaining the data and reducing computations.
That’s your paradigm-relative values talking. That you think “writing equations” is the be-all-end-all of science is not something shared by the other paradigms. (At first) many Continentals dismissed Newton’s attempt at “writing equations” since he was re-introducing occult forces (like gravity) and therefore not doing “actual science” (which consisted of reducing all motion to contact forces only).
Or, at least, that’s what my model of Kuhn would say.
I’m talking strictly about modern science, as it is practiced today; I do understand that natural philosophy, alchemy, etc. did exist (and arguably still do), but I don’t count them under the rubric of “science”.
I have no problem with other paradigms; I was merely pointing out that, within modern science, Newtonian mechanics and Relativity are much more closely related than Kuhn would have us believe; and that describing them as “paradigm shifts” is way too strong of a claim.
I could also argue that it is these features of modern science (reliance on unambiguous equations vs. natural-language metaphors, falsifiability, probabilistic models, etc.) that make its applications so spectacularly effective (as compared to e.g. alchemy or theology). Of course, one could always argue that “effectiveness” is a paradigm-specific metric; but IMO once you start down that road, you end up with some sort of a postmodern worldview where reality is not real and everything is discourse, which is somewhat less than useful.
The perfect opening for a point which I have been wanting to make for ages: utilitarianism is included in a certain type of Aristotelianism. For example, Aquinas’ vision of moral action requires that prudence be the guide for what course to take and in his system prudence is in correctly identifying what’s at stake, measuring the good and bad effects, and choosing the proper means.
I do not understand the aggression against utilitarianism by Thomist types, since utilitarianism can easily be incorporated into the Aristotelian science.
I’m not sure “Aristotelianism (or Thomism in this case) has something to say about making decisions, and it includes what happens as a result of them” gets you quite as far as equating the two. They really are structurally very different views, and Aristotelianism, unlike Utilitarianism and its more contemporary Kantian/rational-intuitionist rivals, approaches a lot of ethics from a different angle entirely, with different questions.
They’re all responsive to the world, so it’s easy to find bits where they overlap. But the emphasis on various parts, and how they show up, is different.
I think it’s worth pointing out that, however useful how-falsifiable-is-this-really can be as a guide to disposing of some really stupid pseudoscience, neither Newtonianism nor Einsteinianism is any more falsifiable than it is verifiable.
The idea that falsifiability is an important demarcation criterion between science and not-science came mainly from Popper (via positivist currents in the 1800s – the earlier positivists, not the logical positivists). The problem was that obviously any set of results could justify an indefinite number of theories, rather than just the one you liked. Popper suggested that the trick was that while we might not be able to verify one theory specifically, we could disprove one.
He didn’t do very well in this argument because the problem is symmetrical: negative results can be explained in an indefinite number of ways just as easily as positive ones.* Unfortunately for Popper this was pointed out very shortly after by Quine, and, embarrassingly for Popper, quite a bit earlier by Duhem. For whatever reason the idea stuck around outside of Philosophy of Science, but it doesn’t actually help much.
*(Because in both cases no theory can be tested in isolation from an indefinite number of other things, any one of which could be the issue – from different scientific theories, to theories about how to interpret the data being received, to unrecognized confounding factors, and so on.)
I’m still confused by what you mean by ‘predictive coding’. Do you mean the general idea that both top-down expectations and bottom-up observations constrain perception?
Or do you mean a specific (Bayesian?) model of how the top-down expectations work, like the one you discussed when reviewing Surfing Uncertainty?
The first one has been on the table since Kant and seems kind of inevitable given e.g. optical illusions. The second one is a much stronger claim. So I’d like to be clear on which one you mean.
This seems like a fair question. I don’t really understand the details of the Surfing Uncertainty model, so I can’t be sure about this, but it doesn’t seem like the parallels you (Scott) draw here are significantly stronger if we assume that specific model than they are if we just accept the general idea.
(That doesn’t mean it’s wrong to refer to that model, if it’s the most detailed plausible theory we have of how the general thing actually works. But I think it is important to distinguish between the two, if only so that we don’t confuse evidence for the general idea with evidence for the specific theory.)
Here is how I understand predictive coding:
Suppose you have an alphabet composed of 27 letters (the familiar 26 plus a space). You are interested in encoding it in binary for transmission. Of course you want to use as few bits as possible. How might you go about doing this? The first suggestion would be to assign each letter a bit pattern of equal length. In this case, each letter in your transmission will take 4.76 bits. You realize that in English some letters occur much more frequently than others, and to devote the same number of bits to each is wasteful. You find a table recording letter frequencies in common English texts, and reassign the bit patterns to give shorter values to more common letters. In this way, you reduce the number of bits needed to 4.03 per letter on average. Next you realize that some letters are followed by others even more commonly than they appear in normal text. Encoding the bit patterns based not only on the letter in question, but also on the previous one, reduces your usage to 3.32 bits per letter.
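The numbers above are entropy calculations. As a minimal sketch of my own (the frequency table below is made up for illustration, not Brillouin's actual data), the equal-length figure is just log2(27), and the frequency-weighted figure is the Shannon entropy −Σ p·log2(p) of the letter distribution:

```python
import math

# Zero-order: 27 equiprobable symbols (26 letters plus the space)
h0 = math.log2(27)  # ~4.75 bits per letter

# First-order: Shannon entropy over letter frequencies.
# This table is illustrative only, not real English statistics.
common = {' ': 0.19, 'e': 0.10, 't': 0.07, 'a': 0.06, 'o': 0.06}
rest = 1.0 - sum(common.values())                 # leftover probability mass
probs = list(common.values()) + [rest / 22] * 22  # spread over remaining 22 symbols
h1 = -sum(p * math.log2(p) for p in probs)

print(round(h0, 2), round(h1, 2))
```

Any non-uniform distribution gives h1 < h0, which is why the frequency-based code saves bits; conditioning on the previous letter saves still more for the same reason.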
Now we play a game. A person is asked to guess what the current letter is. We tell them if they got it right or wrong. The right answer advances the current letter. They might initially guess the letter ‘t’. If they are right, they might further guess ‘h’. Getting that wrong, they could try ‘a’, and so on. The answer to each question, being yes or no, encodes a single bit. We record how many questions they ask over some long text, and therefore find the number of bits per letter to be 1.93.
(This example derived from Science and Information Theory by Leon Brillouin)
In this latter game, we ask the participants to guess (predict) what a letter is, and therefore define an encoding (coding) for each letter. The method by which a person performs this prediction is twofold. First, they have some idea what the text is saying, and therefore what it will say next. Second, every time they receive a negative response, they realize the text is saying something slightly different than they guessed, and so change their prediction for future letters.
The use of bits highlights an important practical application of all this. When you see some text as I am writing here, you see 4.76 bits for every letter (more, because of capitalization and punctuation and what not). And yet you require only 1.93 bits in order to know what is being said. The extra 2.83 bits take the form of redundancy. If I made some spelling error, or you read what I said particularly quickly, you might miss one of the letters I intended to convey. Yet because you have so many extra unnecessary bits, you can recover what is lost. This is done similarly to how it was done in our game. As you read, you expect some letter to come next. When you encounter a slightly unexpected letter, you would update your expectation to account for it. When you encounter a completely unexpected letter, you might ignore it and continue as if your expectation was met.
To tie into the card example: a playing card contains log2(52) = 5.7 bits of information. If you are flashed a playing card very quickly, you might only have enough time to get 5.7 bits of information out of it. In this case, you would be forced to assume it is a playing card. If you have more time to look at it, you might be able to extract more bits, but even then, you might so heavily expect a playing card that you ignore other possibilities.
Going back to the game: A person is allowed to ask which letter is next. But what makes the answer a single bit doesn’t depend on the nature of the question, only the binary nature of the answer. We could permit any yes-or-no question and still count bits by the number of questions. We then get into the interesting game of what question to ask. If someone had no clue what letter would follow, and wanted to determine it as quickly as possible, they might ask whether it appears in one half of the possible letters or the other. Or if they feel sufficiently confident in their guess, they might guess two or more letters at a time. (Brillouin points out that the value of 1.93 bits per letter must be too high, because we force the player to ask for the letter even when it is obvious.)
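The halving strategy described above is just binary search over the alphabet. A small sketch of my own, showing that a clueless player never needs more than ceil(log2(27)) = 5 yes/no questions:

```python
import math
import string

ALPHABET = sorted(string.ascii_lowercase + ' ')  # 27 symbols

def questions_needed(target):
    """Count yes/no questions when repeatedly asking 'is it in the first half?'"""
    idx = ALPHABET.index(target)
    lo, hi = 0, len(ALPHABET)
    n = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        n += 1  # one yes/no answer = one bit
        if idx < mid:
            hi = mid
        else:
            lo = mid
    return n

worst = max(questions_needed(ch) for ch in ALPHABET)
print(worst, math.ceil(math.log2(len(ALPHABET))))  # both 5
```

A player with a good model of English does better than this 5-bit worst case on average, which is exactly how the game gets down to 1.93 bits per letter.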
Now the playing card. You ask: “is it red or no?”, (no), “is it spades or no?”, (no). The prevailing paradigm implies that you now have complete information on the suit. “then it must be clubs?” (no). Once you realize that these are fake playing cards, you ask about the color and the suit independently. One could do a treatment of paradigms in science in a similar way: “is it a particle?” (yes) “then it isn’t a wave?” (no). “wait what?”…
Have you heard of Ludwik Fleck (e.g. “Genesis and Development of a Scientific Fact”, https://en.wikipedia.org/wiki/Ludwik_Fleck)? He was a predecessor to Kuhn, unfortunately somewhat forgotten, and in my opinion a much more nuanced and deeper philosopher (his contribution is actually acknowledged by Kuhn, though I can’t find the exact quote right now). Kuhn’s theory sounds in many respects like a poor man’s version of Fleck (i.e. a watered-down version of a truly subversive epistemological argumentation). He also provides examples from biology ;).
Fleck is so good – it’s impressive as hell how early he wrote that. Earlier and better than Kuhn – it’s hardly an original observation that Kuhn is sloppy with his terminology and writing in general. If he didn’t want to be misinterpreted, he should have been a lot clearer in the first place!
A philosophy lecturer told me that it was found that Fleck’s book had been checked out of the library exactly once at a certain university – by Kuhn. No idea if that’s actually true.
I also think Imre Lakatos is a lot more actually useful than Kuhn (or Popper, for that matter – he’s something of the synthesis of the two).
I just wanted to say thank you for a fantastic blog. It often challenges my own layman views and beliefs, and even though I’m not able to follow everything, it feeds my head in the most delightful way.
I’ve always thought Iphigenia was closer to a trolley problem. Slaughter the innocent princess or abort the Trojan War?
The Greeks had many unsolved moral dilemmas, and these dilemmas make for true tragedy.
The Iphigenia case.
Orestes’ vengeance dilemma.
Antigone’s origins of morality dilemma.
The Melian Dialogue: what do the strong owe the weak?
I find this quote very interesting, because the paradigm everyone mocks (according to this) is the same paradigm current in medicine today.
Years ago, I started to have a problem with the skin on my fingertips peeling off. This got to the point where I consulted a doctor, and he told me “we call this desquamation, which means “it’s peeling”. We don’t know why, and there’s nothing you can do about it.” Eventually, it cleared up by itself. We don’t know why.
Similarly, we observe that collections of symptoms often co-occur, and we identify a particular disease by its collection of symptoms. Lupus was diagnosed by the simple system of “if any 4 of these symptoms are present, the patient has lupus”. My mother had a patient who complained to her of how she [the patient] had had three of the symptoms for a long time. With only three symptoms, she didn’t have lupus. Eventually the patient developed a fourth symptom. With four afflictions having fallen into place, not only did she have lupus, but the three symptoms of her earlier life, which hadn’t been lupus at the time, retroactively became symptoms of lupus.
Note that my story involves two different people (the patient and my mother) mocking the stupid paradigm for diagnosing lupus, and I personally agree that it’s stupid and deserves to be mocked. (And the wikipedia section I linked notes that the diagnostic criteria shouldn’t be used to diagnose individual patients, presumably because their logical failings are so apparent.) But the paradigm nevertheless stays in use because we don’t have anything better. Wikipedia offers no criteria that are good enough for the job of diagnosing an individual, just several sets of criteria that aren’t. Unsurprisingly, the inadequate criteria, being the only criteria, still get used.
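The “any 4 of these symptoms” rule in the anecdote is easy to state as code, which also makes its arbitrariness visible: three matching symptoms score exactly like zero. A minimal sketch with an illustrative symptom list (not the actual ACR criteria):

```python
# Illustrative subset only -- not the real diagnostic symptom list
SYMPTOMS = {"malar rash", "arthritis", "photosensitivity",
            "oral ulcers", "serositis", "renal disorder"}

def meets_criteria(patient_symptoms, threshold=4):
    """Rule-based diagnosis: 'threshold or more listed symptoms present'."""
    return len(SYMPTOMS & set(patient_symptoms)) >= threshold

print(meets_criteria({"malar rash", "arthritis", "photosensitivity"}))
print(meets_criteria({"malar rash", "arthritis", "photosensitivity",
                      "oral ulcers"}))
```

The hard cutoff is what produces the retroactive-diagnosis oddity in the story: crossing from three symptoms to four flips the output, and the first three are then reinterpreted under the new label.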
A useful concept here is Primary X vs Secondary X. Primary X means X is happening, we’ll treat it and hope for the best. Secondary X means there’s something else going on, which has the secondary side effect of X – in which case you may treat X’s symptoms, but you usually want to address the primary cause. It’s pretty clear in the doctor’s head which is which, though sometimes it’s a bit annoying when they don’t say it out loud.
The cases I mentioned, of desquamation and lupus, are both Primary in your terms. The symptoms of lupus would be Secondary.
For most doctors, it’s very important that a mental model is useful, and not so important that it represents the absolute truth. Similarly, for most engineers, a Newtonian model of physics is more useful than an Einsteinian one. The underlying scientific paradigms could shift dramatically without changing much at all about how patients are diagnosed or bridges are built.
I would say that widespread behavioural norms, like using standardized diagnostic criteria or looking both ways before crossing the street, resemble scientific paradigms in how they spread and win acceptance, but ultimately they’re a very different kind of thing. They don’t claim to be “true” or to tell us anything about the fundamental nature of reality. Instead, their acceptance depends on being useful or safe or efficient — a much narrower and context-specific kind of claim.
I don’t think the concepts are as different as you say. If you complain to a doctor about the incoherent definition of lupus, they’ll tell you “yep, it’s stupid and everyone realizes that”. Nobody believes that lupus sufferers have a metaphysical lupus-generating quality which is stochastically realized as between four and eleven seemingly unrelated symptoms. Everyone believes that the symptoms of lupus have non-metaphysical causes. But nobody knows what those causes are. It’s an open question whether everyone currently suffering from “lupus” actually has the same problem, or whether different problems have been erroneously given the same label. It’s not that doctors think this state of affairs is fine. It’s that they can’t do any better.
Similarly, I don’t think the shift from “it is in the nature of trees to grow” to “it is in the nature of trees to grow, because they use chlorophyll to extract carbon from the air and incorporate it into themselves” is a paradigm shift so much as a natural development of the earlier observation. It’s not like science taught us that it wasn’t in the nature of trees to grow.
Your problem here is that you’re so in the modern paradigm that you’re reading Aristotlean statements to fit it, rather than understanding them as examples of a different paradigm.
The reason the doctors agree that the current situation with lupus is “stupid” is precisely because medicine hasn’t yet managed to fit it into the modern paradigm of causes. If the doctors were working from the Aristotelian paradigm of causes, they would say, “Lupus sufferers have a metaphysical lupus-generating quality”, just like it was the metaphysical nature of objects to want to come to rest at the center of the Earth.
But they don’t, they operate from the modern assumption that something is causing the lupus, and say, “We don’t know what causes lupus, so we have to use a rule of thumb about symptoms”. Just like when an object comes to rest, a physicist looks for what caused the object to cease in motion (say, air resistance), rather than taking it for granted that objects naturally come to rest.
That really is a paradigm shift; the belief that it is natural to elaborate past “it is in the nature of trees to grow”, and that the elaboration isn’t really finished until you at most have a handful of basic physical laws and particles left explaining why quarks and electrons combine into atoms that combine into molecules that combine into cells that combine into trees, and probably a reason why the quarks have the properties they do and how the physical laws all derive from one equation that fits on a T-shirt. When you’re in the modern paradigm, “it is in the nature of trees to grow” is a summary of the details, not a full explanation; but it was a full explanation in itself in the old paradigm.
If having the three symptoms is only likely with lupus, the diagnostic criteria are wrong. If having the three symptoms is likely with other diseases as well, then the diagnostic criteria are correct, and concluding that she had lupus from the first three symptoms would have been a lucky guess – but still just a guess. The fact that making the lucky guess would have been correct in hindsight doesn’t justify saying “they should have made that lucky guess”.
I see an important difference between Aristotle and the doctors in your anecdotes. Aristotle, to the best of my knowledge, thought that “because it is their nature” was an explanation. The doctor who told you about desquamation, on the other hand, told you that he had no explanation.
As for lupus, there exists (presumably) an underlying cause named lupus, which exists regardless of whether it is diagnosed as such. The appearance of new evidence can change the diagnosis, but it does not travel back in time to change the cause.
I think the causal-propensities stuff is the sort of thing that gets very unfair to Aristotle very very quickly, partially because the structural difference is down a bit deeper than people think, and also because the Scholastics-to-Cartesians transition historically involved the Cartesians doing a lot of (unfair) mocking.
The difference is more along the lines of thinking in terms like “These objects have causal propensities (x) and (y), and those objects have causal propensities (y) and (z)”, as opposed to thinking in terms like “All objects have the same causal propensities but those propensities result in different effects based on what is around them or what they’re shaped like or something.”
One way of thinking really does work better for some cases than others, and vice versa. You can make fun of Aristotelian dormitive properties (although that was intended to make fun of someone trying to look like they knew something about science and not knowing anything at all), but it’s not like there aren’t equally many goofy or just wrong aspects to how Descartes went about it.
I’d like to point out that the Ptolemaic picture is not worse in the sense of falsifiability/accuracy/etc. than heliocentrism. The whole point of relativity is that either coordinate system can be used to express mechanics, just in a more or less convenient way. In fact, the original heliocentric description using perfect circles was worse than the existing epicycle formulations in geocentrism, which explains why it wasn’t so obvious that one needed to change. (The reason being that epicycles are a disguised form of Fourier decomposition, so one could in principle approximate any periodic motion in the sky with them and enough patience.) Heliocentrism can be mathematically more convenient, but only once you have really worked out its details vs. the convenience of an already established paradigm. Saying that geocentrism is wrong reminds me of this:
I’d say that in science new paradigms “phagocyte” the old ones, rather than just eliminate them. It’s not so antagonistic as some philosophers draw it. Many revolutions came not because there was an anomaly between a paradigm and some small experiment (which as you say is usually discarded as “Bayes tells me that it is more likely that you experiment has a flaw”), but because there were two big paradigms with decent foundations in their respective domains, and then somebody tried to extend them to make contact and the whole thing exploded. Then some genius thinks of a smooth way to connect both domains, and general happiness ensues.
Going with physics, electricity and magnetism didn’t really work as a unified theory, so Maxwell added the “displacement term” and hey, now we all know it’s just electromagnetism, and by the way, it also explains light. Mechanics and electromagnetism didn’t really like each other, so let’s change these transformation laws and hey, special relativity, and by the way time is not absolute. Special relativity and Newtonian gravity have serious frictions, so let’s rewrite gravity as a geometrical effect, and hey, general relativity, and by the way black holes and gravitational waves and cosmology. Electromagnetism and thermodynamics were not good friends, so let’s assume energy is quantized, and hey, quantum theory, and by the way now we know why some things like light look like waves and particles at the same time and also, what the hell is wrong with reality. Special relativity and quantum mechanics have very different politics, so let’s add some antimatter here and there, and hey, quantum electrodynamics, and by the way now particles can be created and destroyed. Quantum electrodynamics and the nuclear force look pretty similar but have wildly different distance behaviours, so let’s assume somebody else is giving them mass and hey, electroweak unification, and by the way, there you go, a Higgs boson. Gravity and quantum field theory have some strong divergences and… well… it’s being worked on.
Light as a wave, or all orbital descriptions being equivalent, or the Higgs boson creating mass, are not so much isolated ideas that some enlightened guy throws around, not even because he’s seen some anomalous experimental data. They often come as a consequence of trying to glue together two paradigms we know to be useful in their domains, which must still hold true as subregions of the new “super”model.
This is a good summary, but I want to reformulate or add a few things that got lost behind simply explaining the broad idea here.
It would be better to distinguish more clearly between schools and paradigms. Copernican astronomy, Newtonian mechanics and Predictive Coding are all schools. Only the former two were paradigms; that is, largely unchallenged and generally accepted. In the non- or prescientific stage that medicine, psychology, etc. are currently in, there are plenty of competing schools, and therefore no paradigm. What is required is an exemplar that sets the stage for consolidation: a paradigmatic, i.e. paradigm-building, explanation for a phenomenon, on which everyone thereafter models their own explanations. For example (my example, not Kuhn’s), Darwin proposed a particular explanation for how the birds he found on the Galapagos islands got to have their beaks. Since then, a story about how something in biology came to be counts as an answer if and only if it has the same form as Darwin’s explanation.
Constructing such explanations following the form of the exemplar is the process of Normal Science, which a truly scientific discipline is mostly engaged in: solving puzzles. That sounds dismissive, but solving a puzzle might be as interesting as explaining how birds came about – not just on Galapagos, but in general – that is, they’re dinosaurs. Exciting!
I think the summary is also light on some of what Kuhn in particular was most interested in: incommensurability. Yes, Kuhn did indeed claim that we can make statements about the falsity of something only from within a certain paradigm (or school). Now Kuhn has plenty of inventory for talking about how a particular school might be thoroughly useless (i.e., it can be inconsistent and utterly fruitless), but “empirically false from an objective, out-of-paradigm point of view” is not amongst them. In fact, it is inherent especially to a science following the highest standards that it is deeply embedded in one particular worldview, or one might say, ideology.
My favourite, and most mind-boggling, idea of Kuhn’s might be that a science could some day simply end. There needn’t be this eternal cycle of revolution – normal science/paradigm – revolution; it could be that some paradigm simply fails to do its job, which is to generate new puzzles. Physicists might simply run out of puzzles to solve, and physics would just … stop. Would that mean we have discovered the ultimate truth of the universe? No, we would simply have a paradigm that has stopped presenting us with puzzles, without accruing anomalies that would require a new paradigm.
There is a highly entertaining story by film maker Errol Morris about his time studying with Kuhn here: https://opinionator.blogs.nytimes.com/2011/03/06/the-ashtray-the-ultimatum-part-1/
Morris goes in depth on incommensurability and the weirdness of Kuhn’s ideas. He ends up to some extent blaming Kuhn for, of all things, the Trump presidency here: https://blogs.scientificamerican.com/cross-check/filmmaker-errol-morris-clarifies-stance-on-kuhn-and-trump/
Part of the confusion seems to come from the fact that there have been multiple editions of the book, and at least some people (Ian Hacking springs to mind) think the first edition reads as being a lot more radical than the second – I think mainly because of the postscript to the second edition.
Freeman Dyson has an anecdote about a lecture or conference or something where the book was being discussed, with Kuhn in the audience; at some point Kuhn got so annoyed by what people had made of his ideas that he shouted out “I am not a Kuhnian!” and stormed off.
Another example of a paradigm shift you should consider is the shift from behaviorist (Skinner) to generativist (Chomsky) linguistics. The former thought that language could be modeled as simple response to stimulus, while Chomsky insisted that speakers had to have non-trivial mental representations in order to correctly generate grammatical utterances. Today almost everyone agrees with Chomsky, but it’s worth remembering that the amount of evidence for this is surprisingly small, and that the behaviorist description covers quite a lot of observable behavior.
Or economics pre/post Kahneman & Tversky.
Or (evolutionary) psychology pre/post Tooby&Cosmides.
And if you lower expectations, you can see paradigm nudges a lot more often – Taleb changed a lot of people’s way of thinking in a lot of places.
This was a fun read, but does the topic have to be more complicated than:
“Sometimes evidence for or against a scientific theory is conclusive. But often it is ambiguous or inconclusive. When you have to formulate a theory based on such evidence, you rely on subjective intuitions as well as objective evidence. People’s intuitions will naturally lead them in different directions, but the more conclusive a piece of evidence is, the fewer people will espouse theories that contradict it (in the extreme case, there are very few such people, and we call them “mentally ill”). In many cases, the evidence for a theory accumulates in bits and pieces, so over time the theory gets gradually more convincing. One might expect the number of supporters of the theory to grow gradually in response. But in practice, one’s intuitions are heavily influenced by their surroundings, education, experiences, and selfish desires. So it can take a generation of theorists with similar intuitions to die off; then the next generation, who grew up without that particular set of intuitions, can adopt the new theory immediately.”
Yes, it is more complicated. I think the key added layer of complexity is that there is a huge amount of interaction and reinforcement between the theory and the priors.
In your model, everyone has priors, and they have evidence, and then they combine all of their priors with all of their evidence to predict the world.
In the actual discussion, the way it works is ‘everyone has priors’, they observe one piece of evidence, they adjust their priors based on the evidence, they observe the next piece of evidence, and so on.
There’s actually a huge difference in how social knowledge/consensus evolves depending on which of the above ways of learning happen.
I think you’re missing the way that theory interacts with field; if evidence-gathering is mostly hypothesis-driven, evidence which exists at right angles to the paradigm is less likely to be collected or recognized until (unless?) a nascent paradigm coalesces around it. Evidence without paradigm is a naked singularity.
Hate to be that guy, but Oedipus is Antigone’s father, not her brother. But I guess we all get confused by that family tree.
Maybe I’m missing something but the bit about Antigone in the post above, and in the email, is correct. Creon decreed that Antigone couldn’t bury her brother, Polynices.
Hate to be that guy but Oedipus is Antigone’s father, not her brother.
He’s both. Antigone, Ismene, and the twin brothers Eteocles and Polyneices are the children of Oedipus and Jocasta, and Jocasta is the mother of Oedipus by Laius. So Oedipus is both father and brother of his children, and Jocasta is both mother and grandmother of them. (And you thought your family get-togethers were rowdy!)
Antigone’s dilemma is not just one of “which side of my family do I obey?”; it’s a wider one where ‘the personal is the political’ and vice versa. Creon is not only her uncle but the ruler of Thebes now that Oedipus has left and Eteocles and Polyneices have killed each other, and he bans the burial of Polyneices as a political decision: the king punishing a traitor who aggressively attacked Thebes with a foreign-raised army. If you’re an American who thinks of Benedict Arnold as a traitor, congratulations, you know why Creon decided that way. And if you would instinctively find it strange to give any honours to Arnold, congratulations, you now know why it was such a big deal for Antigone to defy her uncle (who was also her king) and his orders as king. And if you’ve ever heard E.M. Forster’s “If I had to choose between betraying my country and betraying my friend, I hope I should have the guts to betray my country”, you know Antigone’s side of the story. This is the crux of the argument: do you betray your country, or your family? What if they’re conjoined, where your family represents and is supposed to safeguard your country? What if they’re both supposed to mean as much and have as strong a claim on blood ties, the city/motherland or fatherland being the metaphysical parent and you the child?
In this way, Creon puts public duty (as the king) above his familial duty (as uncle to the dead man). And part of the argument for that punishment rests on the fact that Polyneices was expected to respect the city and the people and the rule because of his blood ties as a citizen and a member of the royal family; he has not merely trespassed against civil law, he has transgressed familial bonds (as a son of Thebes the motherland).
Antigone’s defiance of Creon is not merely choosing family above the polis, it’s arguing that there are fundamental values and obligations that are binding even more than the construction of political identity: the dead must and should be buried because it is not just something we owe to them from personal attachment, it is something necessary for them to achieve the proper translation to the afterlife and so part of the proper functioning of both society and nature. She is championing the superiority of divine/universal law over human/local law as represented by Creon’s order. It’s a tragedy of interlocking values that both support and oppose one another, and their shattering destroys everyone around.
In the US, we’ve squared that particular circle by passively-aggressively honoring Arnold, winner of crucial victories in the Revolution, without naming Arnold, vile British quisling. There’s the Boot Monument to “the most brilliant soldier of the Continental army”, the nameless plaque for the “major general born 1740” at West Point, and the empty niche at the victory monument in Saratoga (the other three niches each have statues of one of the other three US commanders in the battle).
So does anybody know how to trace the angles of a stair?
My guess is to put the 2×12 at the angle you want the stairs, have it stick out the door, and draw the intersection between the door and the wood, cut that line, put the cut part against the floor and repeat the process. The second time around it won’t be the same angle unless you chose a 45° angle to begin with.
If you lay the 2×12 such that the right edge is resting on the corner of the second floor, and the bottom right corner of the 2×12 is where the bottom left corner of the 2×12 (when finally installed) should be, your 2×12 is now parallel to its correct location and the angles are simply horizontal for the bottom cut and vertical for the top one; you can use a level and/or plumb bob to mark them. If you measure the length of the 2×12 from the bottom right corner (where it touches the floor) to the point where it touches the corner of the second floor, that length will be the correct length (along the top-left edge of the board) of the distance from the horizontal cut to the vertical cut.
I believe this works mathematically but there may be solid practical reasons it doesn’t; a bit of searching shows most guides tell you to do the math.
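If you’d rather sanity-check the geometry before cutting, here’s a minimal Python sketch of the rise/run triangle; the dimensions are made-up assumptions, not anything from this thread:

```python
import math

# Hypothetical stair dimensions in inches (illustrative assumptions).
total_rise = 108.0   # floor-to-floor height
total_run = 144.0    # horizontal distance the stairs cover

# Length along the board's edge from the horizontal (floor) cut to the
# vertical (second-floor) cut: the hypotenuse of the rise/run triangle.
stringer_length = math.hypot(total_rise, total_run)

# Stair angle from horizontal. The bottom cut meets the board's edge at
# this angle; the top cut is at its complement.
angle = math.degrees(math.atan2(total_rise, total_run))

print(f"board edge length: {stringer_length:.1f} in")
print(f"bottom cut: {angle:.1f} deg from the edge")
print(f"top cut: {90 - angle:.1f} deg from the edge")
```

With those made-up numbers (a scaled 3-4-5 triangle) you get a 180″ edge, a bottom cut of about 36.9°, and a top cut of about 53.1°, which is a handy way to confirm the level-and-plumb-bob markings before committing the saw.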
Here’s a 30 second mspaint version of the above.
That suggests a way to do it without measuring. You take your square and draw a line across the 2×12 from the second floor corner, perpendicular to the long edge of the 2×12. This corresponds to the upper short edge of the orange piece in your diagram. Then your vertical cut line comes down from the top end of that line.
This is my take, but I wouldn’t bother with parallels.
Just turning it around and drawing the second edge the same way as the first one.
Great review. You could save time in an introductory philosophy of science course by assigning this to read instead of the book, because as you point out a lot of it is already in the water.
Kuhn gets overinterpreted a lot by people who like to push various species of relativism. As I see it, such overinterpretation results from taking conclusions that only apply cleanly in the limit case and generalizing them to the whole domain. In this view the parts of a paradigm are all precisely dependent on each other for meaning, to such an extent that if a paradigm is even somewhat different from another it is completely different and therefore not comparable at all, and the distance between them is not meaningfully traversable. Paradigms are internally integrated and coherent, and insulated from each other. You have to pick one because it’s impossible to mix them, and outside of a particular paradigm a concept means nothing at all. In or out.
Real science isn’t like this, and therefore conclusions that follow from this don’t necessarily apply. Kuhn uses examples that suggest it, but as many have said since then, he kind of cherry-picks, and generalizing the pattern and using it to draw far-reaching and radical conclusions about science as a whole is, well, an overinterpretation.
In real life concepts are both a bit vague and meaningfully more-or-less different (instead of just “the same” or “different”, full stop) in a way that makes it possible and in fact common to compare paradigms and pieces of paradigms (pieces that can be moved around without losing all of their meaning). This is because what we have are typically paradigm-like structures that overlap partially and are at least somewhat reconcilable. This is pretty true in the physical sciences and very true in the social sciences.
The ideas in TSOSR are valuable not because they describe science perfectly but because they work as a corrective to the prevailing view at the time. It’s one pole, and adding it to what we already had creates a new space (a spectrum where there used to be a point) which is great, but it’s important to remember that the new pole isn’t the whole space. To understand science you need both that side of the story and the fact-gathering/positivist/naive inductivist/whatever one. Generalizing only that facet gets you to the wrong place just as much as generalizing only the logical positivist side (or the falsificationist one if you want to get all multidimensional) does.
The best thing to take away from this whole cluster of ideas is, imo, that in order to express something as a fact it needs to be put into words, and for that concepts need to be defined. That means that the truth of any fact relies on the validity of the surrounding conceptual system (or systems, because you can’t exactly separate them into neat units) that gives the phrase its meaning. This is sometimes very important, like when you deal with highly abstracted entities that depend a lot on particular conceptual systems like “God”, “proletariat” or “gender”, and sometimes not so important (but still strictly speaking true) when you deal with concrete stuff like tomatoes, trees or chairs (this is pretty much what Sam Harris and Jordan Peterson have spent hours arguing about).
Disclaimer: Possibly confused braindump. I don’t know that much.
Nice review, I found your examples very insightful.
This is actually a deep and fascinating question that we don’t know the answer to. I feel that the existence of objective reality isn’t enough to explain it. We can learn and do science much better than no-free-lunch theorems suggest, which you might explain by a kind of anthropic observation bias, but also things like SAT-, TSP- and MIP-solvers work much better than the theory of NP-completeness suggests, to which anthropic arguments might not apply as much.
Alternatives include: 1. Getting hit by a bus just happens sometimes; there is no way to avoid it. 2. The way to avoid getting hit by a bus is to repeat in whispers “I am one with the force and the force is with me”. 3. It is a common misconception that people can get hit by buses, but that has actually never happened. The thought is some kind of type-error: buses are not the type of things that can hit people; they are the kind of things that transport people from one stop to another.
Reading this post was fun, but also sort-of depressing, as it drives home to me that having read a book 30+ years ago, at least for me, isn’t all that much like having read a book more recently. Lots of that knowledge has decayed over time, leaving a vague Cliffs Notes version in my head.
I don’t read Kuhn — or even Feyerabend — as arguing that there is not a world that is a specific way, or that there is a world but it is not a specific way. I read them both as saying that “truth” is a property of the map, not the territory.
So no, no facts are objectively true — because there is no “objectively true” map that we could use to confirm these facts. Why not? Well, how could we possibly determine whether a map A is “objectively true”? We could compare it to some other map B — but to be able to compare them, we need some measure of “truthfulness” of the maps. So we develop some measure M, and we find that M(A) > M(B), and therefore A is more objectively true than B. Iterate this process across a lot of the most plausible maps and, while you may not ever be able to prove A is objectively true, you can at least raise your confidence in the objectivity and truthfulness of A to an arbitrary degree.
But wait — what’s so special about M? Suppose I propose a different measure of “truthfulness” — call it N. And suppose that N(B) > N(A), so we have conflicting theories about whether A or B is more “objectively true”. Now we need a way of adjudicating whether M or N is a better measure of a theory’s “truthfulness”. So Alice proposes a measure for these measures, and finds X(M) > X(N). But Bob proposes Y instead, and Y(N) > Y(M).
As far as I can tell, there’s no way out of this infinite regress.
You can appeal to common sense to short-circuit this process instead of deriving measures of measures of measures of measures ad infinitum…and indeed there is a whole school of philosophy based on doing so. But this doesn’t really help, for a few reasons:
1. Different people often have different “common sense” interpretations of the same phenomena, and there is no clear way to adjudicate whose sense is the commonest without a similar infinite regress to the one above.
2. We’ve seen many times in the past that “common sense” interpretations turned out to be “false”, and unintuitive or counterintuitive interpretations turned out to be “true”. And Kuhn points to cases where what was once “obviously false” according to common sense have become “obviously true” according to common sense, suggesting a widespread change in what constitutes “common sense”.
So the determination of what constitute “objective facts” takes place in an interpretive framework, but we don’t have a good way to compare the quality of interpretive frameworks. Worse, if we try to evaluate our interpretive frameworks we’ll find our only vantage point is from within the same interpretive framework! This is similar to the joke about the fish saying “what the heck is water?”
Two relevant concepts:
1. The Münchhausen trilemma points out that fully justified knowledge is impossible. For any alleged fact, we can ask for a justification. But for any justification, we can ask for further justification. There are only three possible outcomes: a) infinite regress; b) circularity; c) axiomaticity. On any of these three branches of the trilemma, there is no objective justification for the fact we started with.
2. Most approaches to knowledge use axioms, but Quinean holism takes the circularity branch instead, and proposes that empirical knowledge is neither top-down nor bottom-up but “middle-out”, with various propositions derived from empiricism supporting each other in a complex, networked way which can be fruitfully compared to the concept of tensegrity.
And again, none of this supposes that there is not a concrete physical reality independent of our mental models of it. Rather, it points out that we have no vantage point to view that concrete physical reality apart from our mental models — at best we can break out of one mental model to another mental model.
“Rather, it points out that we have no vantage point to view that concrete physical reality apart from our mental models — at best we can break out of one mental model to another mental model.”
This seems to me true but without consequences: yes, we are always only changing our mental models, but these models do seem to converge towards concrete physical reality.
I think this is an appeal to common sense, and I think that it ultimately fails. The best way to see this is in quantum theory.
Compare the Everettian interpretation of QM to the Copenhagen interpretation of QM. Both are fraught with conceptual inconsistencies and shortcomings, but few of those inconsistencies and shortcomings are shared between the two. Conceptually, they are mutually exclusive — if one is true, the other must be false. But they make essentially equivalent predictions about the physical universe, such that it is pretty much empirically impossible to distinguish between them.
Rather than allowing us to converge on a single description of a concrete physical reality, the progression from classical mechanics to quantum mechanics has allowed us to more effectively model the observations we’ve made at the expense of creating divergent mutually exclusive descriptions of concrete physical reality.
Classical mechanics gave us a clear and intuitive description of concrete physical reality; rather than refining this further, scientific progress has muddled our conception of concrete physical reality.
This is kind of a simplification; things have gotten a lot more complicated with the advent of the holographic principle and quantum field theory. This makes the notion that we are converging towards a single description of concrete physical reality more tenuous — the latest work in physics has led to a divergence of theories that are very difficult to distinguish empirically, but which are very different from each other conceptually.
The more we learn, the less clear it is what the correct description of concrete physical reality would be.
Why are maps useful?
With your Open Source example, I would say that the main paradigm it is working under is that ‘code quality’ is easy to measure accurately; sufficiently easy to measure accurately that we can compare code quality to outcomes to determine what else could be affecting outcomes. Or at least that is the school of thought, even if it doesn’t qualify as a paradigm.
That stands in stark contrast to what I would say is the paradigm or consensus among computer programmers: that code quality is difficult to measure.
In your medicine instance, there are the assumptions that people are very similar to each other and that people react in a predictable way to foreign objects. Sure, the four-humours guys agree with that, but you won’t expect every paradigm to disagree with every other in every circumstance.
Structure is kind of tragic because while Kuhn’s ideas are very good, he does a surprisingly bad job explaining them in the original book. I found his later essays much more helpful in understanding his meaning.
You asked, “is Aristotelian philosophy a science?” Kuhn came up with his ideas in struggling with the same question, and he would say that Aristotle’s worldview was a science. Take a look at this excerpt from “What Are Scientific Revolutions?”. In it, he describes how he came to understand how the terms that Aristotle used, while seeming familiar, really referred to entirely unfamiliar concepts.
It’s hard to summarize, but this is probably the strongest section:
I also found this section enlightening, where he shows how two different modern sciences, when discussing a fact, have different paradigms:
In your section II, Kuhn would call those fields proto-sciences. He seems to think that they operate under different rules, and that if there is a “cure” for their condition, he doesn’t know it, and maybe doesn’t think a cure exists:
I, for one, think you were totally right to link in predictive coding! And there is, perhaps, the main point. You can’t perceive the world directly, you need to perceive it through some series of approximations.
There is to me one single and obvious paradigm at play in your examples – Cipriani, Ioannidis, et al; Ceballos, Ehrlich, et al; Terrell et al – each of them requires the use and acceptance of modern statistical methods for measuring things or their effects. When was the word “meta-analysis” even coined? And when was the first meta-analysis performed?
Tangentially related: consider the latest editions of Algebra II books, where the sections on conics are frequently abbreviated or removed and new sections on statistics and probability are included or expanded. The ACT and SAT now include matrices because of engineering. These changes in education indicate that our scientific society has discovered new paradigms that require facility with new tools for processing information into meaningful answers.
Indeed. “Paradigm” is used in very slippery ways in the book, but of course one of its ordinary meanings is example, and one of the clearer ways Kuhn uses it is to represent examples of successful observation or investigation, which legitimize and encourage more experiments of similar design or use of similar equipment and techniques to investigate more things, building on the previous successful examples. Statistical techniques are a perfectly good example of this, and can be applied to leeches as well as to modern drugs; the difference between a thousand years ago and now isn’t just that they had leeches rather than our drugs, but that they had anecdotes rather than our modern statistical techniques.
I believe that Margaret Masterman claimed to have identified 21 different senses in which Kuhn uses the word “paradigm” in Structure.
It seems to me that there is a difference between what is simply an improvement in technology and what is a paradigm shift.
The example of Protagoras, with statistics contrasted to anecdotes, seems to reflect not a difference in assumptions or worldview or priorities or whatever, but simply an improvement in the task of looking at multiple incidents to derive data about the world.
Only if the old class would reject this knowledge, were it taught to them, would it be evidence of a paradigm shift, to my mind.
The old class would reject, or at the very least not understand, what is going on when we do statistics. Take, for example, MacIntyre’s critiques of social science. He doesn’t believe that social science can tell you anything about human nature or society, or understand how it could. He calls social-science “discoveries” somewhat useful, highly contingent stories which offer no predictive power.
I have heard other rejections of statistics from philosophical schools too.
For example: statistics tells you about the group you study, but there is no reason to believe that the generalization will hold true, given the number of unknown unknowns in human affairs and society.
Statistics can speak to large groups but since it cannot tell us anything about the individual patient before us it does not provide knowledge of the individual.
These critiques all have good answers but to understand the answer you have to understand the math and assumptions behind statistical tools.
Personally, I found The Structure of Scientific Revolutions to be a somewhat cryptic book – you can get the general idea from it, but the specifics are often too poorly discussed and the language more succinct than would be recommended for the topic.
I think a later book, The Road Since Structure, does a better job of clarifying some of the concepts and answering his more vocal critics.
Some comments from what I remember:
“I think Kuhn’s answer is that facts cannot be paradigm-independent.”
Mostly, it’s that the “fact” itself is barely describable without a theory. We don’t see the Earth orbiting the Sun; we see the Sun at different points in the sky as time progresses. Revolution around the Sun is already a theoretical explanation.
His theory is kind of agnostic on whether we can find “objectively true” statements – as the very definition of what’s objective requires a theory!
Notice that the paper is working within the paradigm of modern statistical analysis, which could be employed to analyze results produced by more than one medical paradigm. The studies about leeches and depression could be analyzed the same way.
From what I understand, both are paradigms, and both are needed to make the paper possible. One of the key sources of misunderstanding (Kuhn himself admitted it) is that the word “paradigm” can be interpreted in more restrictive or more generous ways through the text; Kuhn later leaned toward the more generous interpretation, IIRC.
Yes they do. Complex chemicals will break apart at high temperatures.
As an obvious example, explosives that detonate will change chemical structure without reacting with anything else.
What explosive compound doesn’t react with some other compound in the process of exploding?
Most high explosives. TNT for example is a complex molecule that decomposes into a handful of small ones if it receives a sharp shock, without reacting with anything other than itself.
This isn’t the same thing as the distinction between detonation and deflagration — a lot of compounds that ordinarily deflagrate will happily detonate under the right conditions, which is one of the reasons why there’s ethanol in your gas — but one of the easier ways to get something that reliably detonates is to come up with a molecule that hates its nitro groups and will throw them away at the drop of a hat. No reaction with another molecule necessary.
“my knowledge of Coulomb begins and ends with “one amp per second””
FWIW, that’s bass-ackwards.
An Ampere is one Coulomb per second; a Coulomb is about 6e18 electrons. That’s rather a lot, enough to ionize about a thousandth of a percent of a mole of a substance with a single unit of charge per molecule.
For things like measuring total amp-hour capacity of a battery based on the amount of reactants, you might want to use more than one significant figure, or you might not care since the total capacity and the useful capacity are probably different- a car battery that won’t turn over might as well be completely dead for most purposes.
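If anyone wants to run that back-of-the-envelope capacity arithmetic, here’s a sketch using the Faraday-constant relationship; the one-mole figure is an illustrative assumption, not a real battery spec:

```python
# Back-of-the-envelope battery capacity from amount of reactant.
# One mole of singly-charged carriers is Avogadro's number of
# elementary charges, i.e. one faraday of charge (~96,500 C).
ELEMENTARY_CHARGE = 1.602e-19   # coulombs per elementary charge
AVOGADRO = 6.022e23             # charges per mole

moles_of_charge = 1.0  # assume 1 mol of singly-charged ions reacts (illustrative)
charge_coulombs = moles_of_charge * AVOGADRO * ELEMENTARY_CHARGE
amp_hours = charge_coulombs / 3600.0  # 1 A·h = 3600 C

print(f"{charge_coulombs:.0f} C  ~=  {amp_hours:.1f} A·h per mole of unit charge")
```

So one mole of unit charge is roughly 26.8 A·h, and as the comment says, quoting that to more than a figure or two is pointless once the gap between total and useful capacity comes into play.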
It’s actually precisely forwards. You might conceptually consider the coulomb to be the prior concept, but the definitions have always been the other way: the definition is that “the ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one meter apart in vacuum, would produce between these conductors a force equal to 2×10^−7 newtons per meter of length.” and the coulomb is 1 A × 1 s. That’s why in the magnetism equation, the permeability of free space is exactly 4π×10^−7, while the coulomb takes the arbitrary value of 6.24150934×10^18 fundamental charges.
Of course, the upcoming SI redefinition will reverse this, but historically, the value of the coulomb is derived from the value of the ampere.
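The 2×10⁻⁷ figure in that definition is just μ₀/(2π) evaluated at one ampere and one meter, which is easy to check numerically:

```python
import math

# Check the quoted (pre-2019) SI definition: two infinite parallel wires
# carrying 1 A each, 1 m apart in vacuum, feel 2e-7 N per meter of length.
# Ampere force law per unit length: F/L = mu_0 * I1 * I2 / (2 * pi * d)
MU_0 = 4 * math.pi * 1e-7   # permeability of free space, N/A^2 (exact, pre-2019)
current = 1.0               # amperes, in both wires
distance = 1.0              # meters apart

force_per_meter = MU_0 * current**2 / (2 * math.pi * distance)
print(force_per_meter)  # ~2e-7 N/m
```

The 4π in μ₀ cancels the 2π in the force law, leaving exactly 2×10⁻⁷ N/m, which is why that otherwise odd-looking number appears in the definition.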
One amp-second is seconds^2 times an amp per second.
It’s not about ‘prior concept’, it’s about the fact that a Coulomb is an amount of electrical charge, not the rate of change of current per unit time.
But it is, and we can show it to you. It just needs a better telescope than Copernicus had.
See also spontaneous generation and Pasteur’s fancy glass bottle, luminiferous aether and Michelson’s interferometer, and many others. There is commonly a single experiment that disproves the old paradigm, or pushes it into the realm of too many arbitrary unsubstantiated epicycles to take seriously. It’s just that doing experiments with the required precision is really, really hard. Not coincidentally, paradigm changes usually occur when it is barely possible to support them by stringing together multiple experiments at the bare edge of the available instruments’ precision and so subject to e.g. confirmation bias, and each addressing only part of the paradigm, rather than going unnoticed until someone realizes that the telescope they’ve been using to spy on their neighbors happens to be precise enough to make parallax measurement easy and unambiguous.
The practical result of which is a process that plays out more or less as Kuhn describes. But philosophically, and Kuhn seems to be all about the philosophy, I think there is a world of difference between “truth is subjective and unknowable” and “optical alignment is a bitch”.
It is now, but it wasn’t back then. Scott’s/Kuhn’s point is that there’s nothing intrinsically guaranteeing that, for some particular scientist, the “true” (or “true-er”) paradigm matches observations better. That the modern physical paradigm doesn’t contain (many) glaring flaws, given equally modern tools, is an empirical fact (and a matter of luck.)
More broadly, what guarantee do we have that the “truth” is knowable? We see old past paradigms that we’ve escaped from, through flaws and inconveniences that we could observe within that paradigm; but sort of anthropically, we’re guaranteed that they were escapable only because we had to escape them to see them as past paradigms. We may presently be in a paradigm without such an escape route- that doesn’t lead to aggravating epicycles, or doesn’t produce obvious anomalies, insofar as we can determine thinking within said paradigm- but which still doesn’t describe the full nature of reality. It might not describe much of reality at all, even; arbitrarily many interesting things may be as incompatible with this paradigm as black holes are with that of a medieval astrologer.
Which is also sort of an answer to Kuhn’s question “What must the world be like in order that man may know it?” – namely, that there’s a substantial unwarranted assumption there.
No, it’s not a matter of luck, and you seem to have missed my point. Science will not contain, at any time, flaws that are “glaring” to the tools of that time. This is not luck, this is scientists using the best tools they have to determine objective truth. Also from that premise, the flaws that are marginally visible to the tools of the time, if you look carefully and from several directions, will be the flaws that are in the midst of being swept away by the latest paradigm shift. The paradigm shifts because the truth-finding tools and techniques get better.
We could observe them only at the tail end of the old paradigm’s reign, and it is because we acquired the ability to observe them that the old paradigm fell.
Conflating this with “…meh, maybe the truth is unknowable, let’s just believe what we want and not imagine ourselves better-informed than the flat-earthers”, is a profound misunderstanding of how science works and I hope it’s not what Kuhn was getting at.
The parallax example is an instance of the dominant theory having glaring flaws; from something like 1750, when the Copernican theory became “dominant,” to 1806, when the parallax was finally observed, this key expected observable was absent. A Renaissance astronomer could have been forgiven for thinking that parallax was such an obvious implication of the Copernican model, and the post-hoc adjustments necessary to rescue the idea so extreme*, that this flaw was a knockout blow to the whole paradigm. How much more glaring can you get?
That Giuseppe Calandrelli was eventually able to observe the parallax and fix this was, indeed, a credit to hardworking scientists doing the best they could with limited resources- but it also relied upon those resources existing, and it is that latter part that is luck. We could have been living deep in a nebula, on a super-Earth too massive to allow practical spaceflight. Or we could have been sapient dolphins, and been unable to build advanced telescopes or any similar kind of technology. Or we could have just been really dumb, and not been able to figure it out. In any of those cases, the parallax would have never been observed.
Likewise, dolphin-Copernicus or dumb-Copernicus or super-Earth-dwelling Copernicus may not have had access to the kinds of observations necessary to motivate the heliocentric model in the first place; in those worlds, the geocentric paradigm would be inescapable. The possibility of other, more advanced paradigms that are inescapable to us in the same sense (because it would require some technological capabilities we don’t have access to, or because we’re not a sufficiently intelligent species, or whatever) seems like an obvious inference.
For instance, the Standard Model seems to work really, really well, and for the most part (aside from some inscrutable oddities like dark energy) there’s no sign of a “deeper” theory. We hoped that the LHC would produce clues to lead us to the next paradigm, but no dice. No one can say for sure that the next particle physics experiment will have better results. What if post-Standard Model physics are only observable if you have an accelerator the size of the solar system? Or the size of the observable universe? Or larger? We’d be stuck with the Standard Model forever. But our limitations in accelerator-building don’t guarantee that there isn’t another, even more exotic particle, which would revolutionize our understanding of the whole system… if it wasn’t utterly impossible to find.
There are midpoints between “everything in the universe can be understood” and “we may as well believe whatever we want.” Obviously, our knowledge gives substantial constraints on the content of any possible future (impossible-to-reach but “valid”) paradigms; we can observe satellites and take a tour around the Earth, so Flat-Earther paradigms are out, forever. But on the other hand, the universe gives us no promise that, for any aspect of reality, there’s a path for us puny humans to find out about it.
* “The stars are how many miles away? And we can still see them, at that distance? For that to work, you literally hypothesize the existence of millions of extra suns! And you thought those Ptolemaic epicycles were too much?”
But absence of parallax isn’t a “glaring flaw” in Copernican theory; it’s only the combination of immeasurably small parallax and large apparent diameter of the fixed stars that is a glaring flaw. A finite diameter implies a finite distance, particularly with the reasonable assumption that stars are the same class of object as the Sun, and the stellar diameters measured by 16th and 17th-century observers corresponded to distances incompatible with the parallax measurements of those observers.
This discrepancy could be resolved by better parallax measurements, or by better measurements of stellar diameter. And in fact, it was in 1720 that Halley used stellar occultation to show that the observed disks were optical anomalies and stellar angular diameter was immeasurably small – thus stars were immeasurably distant and could have immeasurably small parallax.
As you note, it was not long after this (but see also James Bradley and aberration) that the Tychonic model was finally done away with and the Heliocentric model became dominant.
Certainly we are not in that paradigm at the present: the current paradigm in physics has anomalies that we attempt to explain away with epicycles (dark matter and dark energy), and there is still that pesky fundamental incompatibility between general relativity and quantum mechanics.
Kuhn disagrees with your characterization of the reason why “anomalies” are tolerated. Kuhn argues that anomalies are ignored within a paradigm even when they unambiguously challenge the paradigm’s foundations and have solid experimental evidence to support them. The UV catastrophe is potentially an example, but I’d have to go back to Kuhn and some of the commentary to find the others they cite (I vaguely recall one that was discovered by a schoolteacher in a classroom demo and then ignored for many years).
So Kuhn would likely say that paradigms are discarded more for sociological reasons than because a rational weighing of anomalies versus confirming experiments favored a new paradigm. I think he makes a fairly compelling empirical case for that model as an accurate description of actual scientific practice (see e.g. the saying that science progresses one funeral at a time). And recall that Kuhn was writing his theory in direct contrast to Popper’s falsification theory, which held that the sine qua non of science was the ruthlessly rational challenging of existing theories with experimental or other empirical evidence. At least as a descriptive account, I think Kuhn is closer to the mark, if only because of all the cognitive and social biases at play.
It’s natural to find Kuhn’s metaphysics unclear since he was completely unclear about his metaphysics in Structure, and he spent much of the remainder of his career attempting to get clearer on it. Here’s one of the last things he wrote about this:
Now, you may wonder how you can possibly make something clearer by saying it is a form of Kantianism, and as a non-Kant-scholar, I understand the feeling. But here’s my best stab at what’s going on here.
The most distinctive feature of Kant’s metaphysics is that he claims that a large number of things that are ordinarily claimed to be features of mind-independent reality — that is, of the world as it is in itself as opposed to how it is as represented by minds — are actually features of how our minds must represent the world. This includes both the obvious things, like color, and some really surprising things, like causality and the nature of space and time. So things in themselves do not enter into causal relations or exist in space and time, but they still exist and ultimately ground the nature of the world as it appears to us.
Kant’s view is not relativistic because (1) he thinks that the particular facts that are part of the world of appearance are (non-causally) determined by the nature of mind-independent things (the “Ding an sich” mentioned above), and (2) he thinks that all minds impose the same kind of structure on the world (e.g., causal and with space and time).
Kuhn’s proposal is to reject the second claim. Instead of minds all imposing the same type of structure on the world, Kuhn suggests that changing paradigms can impose their respective structures on the world. There is still a mind-independent reality that in some way determines how things appear to us and also constrains how successful a given paradigm can be. But all the things that differ between paradigms concern only the features of our representation of reality. Mind-independent reality does not contain any of the relevant properties and so does not settle things one way or another, except insofar as it somehow renders one paradigm more useful than another at solving particular puzzles.
Anyway, I don’t find this view particularly appealing, but it’s the most coherent thing I’ve managed to get out of Kuhn.
This description is extremely helpful–thank you!
Good comment. 🙂
Focusing on the phrase “being-in-the-world,” which has some affinity to Heidegger and Wittgenstein, I think, clears up something that could make the Kuhnian view you describe less unappealing, especially if you like meditation, à la Sam Harris.
When I think of “being-in-the-world,” I think of Wittgenstein’s point in PI that language is an activity of life. Even proposition making is an activity, and it, like all other activities, is neither true nor false. If you accept that Newtonian physics is an activity and not a set of propositions that exist ‘out there somewhere’ but instead a way to describe (our experience of) the world, then you accept that Newtonian physics is neither TRUE nor FALSE in the sense of actually BEING REALITY. But then again, reality also isn’t true or false… it just is, and if reality is truly one, then true and false are best thought of as statements about the implementation of tools within paradigms.
Today, and especially on this forum, we 1st-worlders adopt the mode of living and thinking (at least much more than prior generations) ‘propositionally.’ It’s a good way of life: it allows for special types of reasoning and interesting insights into experience, and an amazing variety of puzzles and details about our experience become clear when we start using these awesome tools we invented called statistics and Bayesian reasoning and cognitive science instead of stories like “medicine isn’t a liberal art and thus not for intellectuals” (that’s a Roman thought, BTW) or “Trash is overtaking the earth!” We prefer these tools because even though they are difficult to use well, require years of training, and sometimes make us feel hopelessly confused, they can also increase crop yields and life expectancy, help us understand history better, create stabler political systems, and create cool podcasts.
Which systems do you have in mind? Those contemporary political systems that differ from ancient ones have relatively short histories, so I don’t see how we can know that they are more stable.
England’s 350 years of peaceful power transfers, I suppose.
Governments are better managed today than they were 200 years ago. Organizational science, I think, is an improvement on Aristotle’s Politics.
I haven’t read Kuhn, but I was thinking Kant would be the right way to interpret this as well. However I’d say that Kant’s structure is much more along the lines of predictive coding in perception than it is about scientific theory. That is to say I think you can easily have a Kantian Kuhnian, who believes in a shared structure of basic perception/cognition (the Kantian transcendentals) which is used to support incommensurate theories about the world (the Kuhnian paradigms).
This is probably the obvious question to ask, but if things-in-themselves don’t cause sense impressions, then what is the relationship between the two supposed to be? I understand that that’s exactly what we’re not supposed to speculate about since any answer we can think of would be constrained by our cognitive architecture in the same way causality is, but it seems like Kant is sneaking causality in through the back door.
Somewhere, if I recall correctly, Kant apologizes for pluralizing things-in-themselves. His point is that these are, by definition, inconceivable, and number is a way of conceptualizing them. The relationship between things-in-themselves and our impressions is equally unknowable, and assuming one is like faith in God, which for Kant is equally beyond the rational.
I don’t agree with any of the above, but that’s my best understanding of what he’s saying.
I started but didn’t finish Structure. My impression was that Kuhn offered a great description of science as actually practiced — as moves within the social game defined by a particular paradigm, analogous to Wittgenstein’s conception of language. This accorded with my observations of my own field, and gave me a new perspective on the history of science.
But it seemed to me that his notion of incommensurability (at least in its strong form) was mistaken. Whereas I can imagine such a state prevailing between alien beings (though I don’t think it’s likely), I doubt that human theories are ever really incommensurable (rather than just suffering from long inferential distances). And the fact that science (as practiced by humans) takes the form of a succession of particular social games is not really fundamental to the enterprise (an idealized scientific process), but is rather the consequence of human psychology and group dynamics.
A more complicated example might be statistics where there are different schools of thought existing at the same time. I liked this quote from Andrew Gelman:
This motivation can just as easily move science backwards. The Nazis decided they needed to develop “German Physics” to counter the horrible, terrible, no-good, very-bad “Jewish Physics” created by Einstein.
What if we analogize paradigms to languages? Different paradigms have different vocabularies. The Aristotelian paradigm has key terms like ‘final cause’ and ‘natural order’ that don’t really translate into other paradigms. Terms like ‘entropy’ or ‘evolution’ don’t really translate well into Aristotelianism.
Different paradigms even have different grammars. Part of why the Aristotelian paradigm seems really weird is that its propositions have a different structure. “Carbon monoxide has a toxic nature” has only 2 terms, with the structure “A has property B.” “Carbon monoxide binds to hemoglobin, preventing oxygen delivery in mammals” has 4 terms: “A does B, with effect C, in circumstance D.” It’s really hard to translate between these two structures.
You can’t just translate term by term and expect the results to make sense. It’s like using an early computer translator between languages.
Sometimes it seems impossible to translate a single fact and poor translations lead to apparent contradictions. It seems that people have to learn the new paradigm, like a new language, for basic propositions to be coherent and integrate with other knowledge.
Also, we can’t have facts independent of paradigms any more than we can have sentences independent of languages.
Kuhn develops this very idea in The Road Since Structure.
That does not surprise me. It’s a pretty obvious analogy.
And yet, as we know from programming languages, what’s expressible (in principle) in one sufficiently powerful language is expressible (in principle, not necessarily with any convenience) in all of them.
Samzdat did a fantastic job at steelmanning Kuhn and presenting the developed interpretation of his views.
I think Scott is on to something with top-down/bottom-up prediction, which is a theory about our minds linking upward into Kuhn’s ideas about how paradigms undergird our interpretation of the world. The difference between paradigms and processing is that a paradigm is shared by a community, but processing is individual.
Scott’s description of Kuhn resonates with me.
I have, for a while now, been of the belief that there is no truth as far as we are concerned, but that there are claims that are “true enough”. There might be electrons that experience electromotive force and move in certain ways and cause bits of rock to emit light in certain frequencies, or electrons might be total bunk and I’m getting into a bad habit by trying to think about what they’re doing, but in the meantime, I can use something I call my fingers to press what I call a keyboard and view something I call SSC, and it’s true enough for me to think of electrons as helping make that happen.
It might even be bunk for me to insist on there being claims and sensations and the act of naming things. Maybe I’m just a simulation in the matrix. But again, it’s all true enough.
I also recognize that this belief of mine is unfalsifiable. I think it follows naturally from logic; that’s about it. It doesn’t allow me to invent a space elevator or break an encryption key. And interestingly, it presumes the truth of the claim that logic and observation can help us infer truth, but that is about as axiomatic a statement as I can accept. Everything I believe seems to follow from that. (Is this logical positivism? If so, is it really so bad, given the caveat I hang on it?)
…Then again, it does permit me to question nearly everything else, while prioritizing such questioning in a way most people would probably recognize as “sane”. I don’t sit around questioning my ability to fly or whether food is actually a drug conspiracy perpetrated by Monsanto and starvation is actually just withdrawal and if I could power past that I’d never have to eat again. But I do question whether we need immigration reform or whether recycling improves the environment, and I suspect this is healthy and I wish more people did it.
I’d never heard of Kuhn until today. (Or Fleck, for that matter.)
I think you may be confusing existence and understanding here. Electrons are observable, so a class of objects (metaphysical usage of the word here…) that can be called electrons is fine. The paradigms by which we understand and define these (still metaphysical usage) objects are, however, the things that may or may not be right.
If it is wrong (and this seems pretty unlikely considering what we can do with it) then it is possible the observed class of (yes, still metaphysical) objects we call electrons could be redefined, and even divided in light of the new paradigm. But the observable class of electrons would still be definable even if recognised as wrong.
To illustrate this, let’s use the obvious comparison to electrons: whales. The class of (often very real) objects known as whales may have varied over time, since dolphins and porpoises, narwhals and probably others have not always been in it, and may not be in many people’s categorisation of whales. But the significance of whales here is that they can be grouped into higher-level object categories. On biological grounds we tend to class whales as mammals; other paradigms tend to put them as fish, on the reasonable basis that they look fish-like and live in water. The change in the way whales are understood does not mean the observable category of whales does not exist, simply that the paradigm in which it is viewed can change. Furthermore, although I guess everyone reading (and certainly everyone writing) this is of the whales-are-mammals paradigm(s), we are all capable of understanding that the earlier/alternative categorisation of whales as fish existed; we can therefore all understand that the category fish has been larger, as it used to contain whales (and to contain whales you need to be large… sorry). So if our observable electrons were redefined by a new paradigm into either invisible gremlins or very small dragons, we’d still be able to understand that previously all these invisible gremlins and very small dragons were defined as electrons. The observable invisible gremlins and very small dragons also have an identity as electrons; it’s just that in light of our modern paradigms we know this is inaccurate, just as fish-including-whales is no longer the paradigm we use. The whales and the electrons are both still observable even if we understand them differently.
I’ll say all three studies are part of the applied stats paradigm.
Statistics-as-nowadays-used-in-practice rests on lots of dubious assumptions. Relationships between variables are usually assumed as linear or something reducible to linear, basically because that simplifies the math. Distributions of independent variables are more or less assumed, if we are lucky by overextending the CLT or sometimes just by tradition. With some ritual cleaning it is allowed to aggregate lots of vaguely related data into “scales” or “indices” giving a single number and then to treat those like actual measurements of actual quantities. Some ceremonial profession is made about correlation not being causation, but absent a known common cause correlation is taken as baaaasically establishing some causation with ceremonial asterisks. Complex interactions are assumed away, because we need to assume some (almost certainly wrong) model to estimate its parameters. And that’s all while the mathematicians are still watching; when they aren’t, “X explains Y% of the variance of Z” is interpreted as “Z is Y% caused by X”, which Aristotle would correctly have identified as not even wrong but completely meaningless drivel. These are basically lots of interrelated social conventions on what is recognized as “empirical” proof.
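To make the last point concrete, here is a toy sketch (my own hypothetical example, not from the comment): when X and Z share a common cause W with no causal arrow between them, X still “explains” a large fraction of the variance of Z in the R² sense, so reading explained variance as “Z is Y% caused by X” really is meaningless.

```python
import random

random.seed(0)

# Hypothetical data: X and Z are both driven by a common cause W.
# X does not cause Z, yet X "explains" much of Z's variance.
n = 10_000
w = [random.gauss(0, 1) for _ in range(n)]
x = [wi + random.gauss(0, 0.5) for wi in w]  # X caused by W
z = [wi + random.gauss(0, 0.5) for wi in w]  # Z caused by W, not by X

def r_squared(xs, zs):
    """Squared Pearson correlation: the 'fraction of variance explained'."""
    mx, mz = sum(xs) / len(xs), sum(zs) / len(zs)
    cov = sum((a - mx) * (b - mz) for a, b in zip(xs, zs))
    vx = sum((a - mx) ** 2 for a in xs)
    vz = sum((b - mz) ** 2 for b in zs)
    return cov ** 2 / (vx * vz)

# With these noise levels R^2 comes out around 0.64, despite zero causation.
print(f"X 'explains' {r_squared(x, z):.0%} of the variance of Z")
```

The point of the sketch is only that the arithmetic of R² is symmetric and causally blind; the causal claim is an extra social convention layered on top.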
That paradigm is mostly better than the previous convention of just boldly stating a theory based on intuition and anecdotes. Also, for most of the fields presently studied that way, nobody has a better paradigm to offer, certainly not me either.
That said, the paradigm is showing its inadequacies and people are experimenting with epicycles. For now the replication crisis gets blamed on p-hacking and that’s certainly a large part of it. But realistically not all of it. For a while many people hoped the problems might be magicked away with Bayesian methods. Occasionally a blogger might notice that controlling for variables doesn’t seem to work. It is socially semi-acceptable to deride some stats-based sciences as soft and being quantitative and empirical is no longer the absolute defense it was for a while. On other occasions people suspend their suspension of skepticism because some conventionally empirical result is conflicting with theories they trust more.
So basically there is a paradigm here. It is becoming more visible as it cracks, but we can’t have a paradigm shift yet, because nobody has a more fruitful pattern to offer.
Also Gilbert’s 2015 post shows statistics both has paradigms and is itself a big paradigm.
That post is excellent and I hadn’t seen it before, thanks for linking it.
Do you have thoughts on Judea Pearl’s stuff?
Also, I feel like some machine learning methods could constitute an alternative paradigm, especially one where predictive ability is prized over interpretability.
This is the essence of the Norvig-Chomsky debate.
Actually, William James didn’t just coin the phrase ‘blooming, buzzing confusion’ in his 1890 Principles textbook, he also described predictive coding in perception. Check out the section headed PERCEPTION IS OF DEFINITE AND PROBABLE THINGS:
Wild question (this is what I do): could someone, in theory, assume the Earth is flat and then come up with a mathematical physics for the Universe at least as good as the Newtonian one that worked, however freaking complicated? I don’t expect binary answers but p values as to whether it’s possible.
Is Occam’s Razor a guide to truth or convenience? I’m a skeptic regarding humans’ ability to comprehend objective truth. Why should we? A chimp can’t understand objective truths of the universe. Their brains aren’t capable of it. So we have bigger brains than chimps. We can understand much more in our own logic language, but isn’t it hubris to think “Chimps can’t understand the universe, but we, as smarter monkeys, can”? Seems likely to me we can only put truth into our smarter monkey brains as best it fits, just like chimps do. But it’s absurd to think we have the brains to understand Objective Truth when we know chimps can’t and we are just bigger monkeys.
I often think logic itself is something we dreamed up with our big monkey brains. When I dream at night, there’s a dream-logic that connects events which makes total sense at the time, but then I wake & realize my dream-logic, however sophisticated it seemed, was nonsensical. What’s to say our waking-logic of cause and effect as big brained monkeys wouldn’t seem every bit as nonsensical to a much higher intelligence, were there one?
Scott suggests that the moral problems faced by Antigone wouldn’t even be addressed by modern philosophies. In the same spirit, perhaps in a million years our notions of physics and even logic itself will be as easily dismissed. Perhaps our math won’t be viewed as entirely wrong per our limited perceptions and cognitive capacities but as utterly irrelevant monkey-logic.
The obvious difference between humans and the other great apes is that we at least understand the question “is there an objective truth?”. Comprehension might be questionable, not least because objective truth is very hard to disentangle from personal belief (see any debate about climate change to see this in action), but we can define the category ‘objective truth’ at least. It’s probably an excess of postmodernism in my education, but I’m not convinced there is a real objective truth anyway (hence my not capitalising the name) that can fill the category, so it might be pretty hard to comprehend what it is if it is simply the result of each individual human (or chimpanzee, for all that he or she can’t define the category) reflecting upon their own paradigms.
I’m not at all convinced we understand the question “Is there an objective truth?”
At the very least, I’m pretty certain I don’t, and not for want of thinking about it.
It has certainly been my experience that the nature of objectivity becomes less clear the more closely it is examined.
Have you ever asked a chimp if they understand that question or not?
Edit: Seriously, you can perform a simple magic trick for a chimp, or even a dog or a cat, and they’ll be amazed and confused. Then reveal that you were just hiding, or the treat was in your other hand or whatever and they’ll visibly go “oh, I was just mistaken, reality didn’t just do some insane backflip”. That indicates some sort of distinction between what’s real and what appears to be real, I think.
Logged in to say this is a great comment. I hadn’t even considered that dogs and cats might be able to “notice confusion” and become relieved by understanding (even though a cat’s trademark attribute is curiosity!), even though my pets have definitely exhibited that kind of behavior.
It suggests that (obvious though it may seem when said explicitly) an animal’s desire to understand is not necessarily correlated to its actual ability to understand, and that the neural structures that cause this are far older than the primate family (otherwise cats and dogs wouldn’t have them).
Easily. But only at the cost of redefining the concepts ‘flat’ or ‘Earth’ such that they no longer correspond to the everyday meanings of those words.
It seems to me that our system of objective truth does a good job of predicting the future in certain limited contexts with a high degree of accuracy. Does that make a difference?
Actually, chimps do too, albeit in a more limited context. If a chimp drops a banana, does the chimp anticipate that the banana will hit the ground? I would imagine so. The chimp can’t of course describe that interaction as precisely as we can, but the chimp definitely understands objective reality in that sense.
I think for further investigation, you might be interested in the work of Nancy Cartwright, a contemporary philosopher of science.
Aristotelian metaphysics is far from dead. In fact, concepts like essence, final cause and teleology are bashfully creeping back into science under different names. The detour from the 17th century to the 20th was certainly worth it. It was worth bracketing final cause for a while and seeing what’s amenable to reductive mathematical analysis in terms of efficient causes. But it turns out all that was just low-hanging fruit, and that approach has its limits too.
Care to elaborate? I’m pretty critical of teleology, but do my best to familiarize myself with the best arguments in its favor.
I don’t think this is Cartwright’s example (or position), but final causes never really even went away (we just talked about them differently).
The Aristotelians (and Aristotle) didn’t use final causes to mean intentions (like, rocks have inner lives that are excited by the idea of the center), just that there was a natural end state for them (being in the center) and they would reach it unless something outside them prevented them from doing that (like other rocks, or people throwing them around, or whatever).
Drawing a line between explanations that use that sort of claim and explanations that appeal to, say, the second law of thermodynamics is tricky to say the least.
Except that the second law of thermodynamics simply says: the total entropy, measured globally, of a closed system will tend to increase, precisely because, by definition, there are more ways for it to do so than not.
Things don’t “gravitate” to increasing entropy. There are just more ways for entropy to increase than decrease, so on average, it increases.
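The counting claim here can be made concrete with a toy model (my own sketch, not the commenter’s): take N two-state particles (“coins”), where a macrostate is the number of heads. The microstates near the 50/50 split utterly swamp those near an ordered extreme, so a randomly wandering system almost always finds itself at high entropy.

```python
from math import comb

# Toy model: N two-state particles. A macrostate is the number of heads k;
# the number of microstates realizing it is the binomial coefficient C(N, k).
N = 100
total = 2 ** N  # all microstates

# Compare microstates within 10 of the even split vs. within 10 of all-tails:
near_even = sum(comb(N, k) for k in range(40, 61))
near_extreme = sum(comb(N, k) for k in range(0, 11))

print(f"fraction of microstates near 50/50:     {near_even / total:.4f}")
print(f"fraction of microstates near all-tails: {near_extreme / total:.2e}")
# The 50/50 band holds essentially all microstates, so "entropy increases"
# just means "the system wanders into the overwhelmingly larger region".
```

Nothing in the dynamics pulls toward disorder; the high-entropy macrostates simply occupy almost all of the state space, which is the commenter’s point.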
Note that many interpretations of Second Law-style fluctuation theorems for open systems actually come to opposed conclusions: an open system that predicts can in fact act as an “entropy pump” that concentrates negentropy locally as a method of increasing entropy globally.
Eli beat me to it, but to echo them: the second law is a situation where the “end state” is a product of the application of some surprisingly simple mathematical models applied rigorously*. This is not an example of a system where indeterminate mechanisms lead to predetermined outcomes. Do you have any better examples?
* Depending on your preferred thermodynamic model, you can actually derive 2nd law behavior from entirely different premises. Independently, it can be distilled to either “combinatoric growth is really fast” or “there’s no such thing as the opposite of random”.
I’m thinking particularly of the idea of “dispositional” properties (Brian Ellis and Alexander Bird are two people who’ve written books discussing the use of dispositional properties in science). And biology is of course saturated with intentional talk – the idea is supposed to be that it’s to be cashed-out later in terms of lower-level lego, but if people like Ellis are right, it turns out the lower-level lego seems to require some element of dispositionality too.
The other way to look at it is that intentionality in humans, reflective intentionality, is just a special case of a more general natural phenomenon – as MH says here, it’s not that things in nature have minds and intentionality in the way we do, but that things’ behaviours and states necessarily relate to things not-them in one way or another, as end-states, as goals, as “free-floating rationales.”
All of which (including Cartwright’s observations) means that the older 17th century (basically Humean) empiricist idea that science is in the business of “observing regularities” can’t be right; there’s something deeper and more interesting going on – things have natures and behave certain ways necessarily. It’s not that they happen to behave that particular way (out of all the logically possible ways they could behave), and we’re just observing and describing that accidental pattern of behaviour; it’s that being the kind of thing they are, they necessarily behave that way. IOW, logic is not as intrinsically detachable from things as we thought; it’s part of the fabric of nature itself, like the older classical philosophy thought.
One way I try and keep a marker in my mind for this is to think of the regularity of the sun rising. From the Humean perspective, there’s no “guarantee” that the sun will rise tomorrow, we just happen to have observed it doing so up till now – so in effect, that means there can’t be any such thing as inductive logic. If that’s the case, the only way around it is either Popper’s way (there’s only deductive logic, but it’s used via modus tollens, and we sculpt away falsehood), or a return to the classical way: there’s no question of the sun not rising tomorrow if it’s really the sun, because what we’re positing by calling it “the sun” and thinking of it as the sun, is just the kind of thing that rises in the mornings (unless there’s some intervening condition). The room for error would then be that maybe it’s not in fact the sun (as we think of it, as the kind of thing that rises), rather it’s something that looks like the sun but is something else, or a “grue” sun, so to speak, or some alien trick, or something we don’t understand yet.
In a way though, that’s pretty much the same as the Popperian position: we posit possible entities, deduce what’s likely to happen if they’re that kind of thing, the kind of thing we think they are, and if eventualities pan out as we expected, then they’re that thing. The only real difference is in accepting the fact that there can be entities that behave consistently, laws that describe natures and essences.
In order to do this, first of all you have to assume that there is such a thing as depression, that it is a well-defined disease that can be diagnosed and treated the same way you treat physical diseases with leeches/drugs, which in turn implies a fundamentally mechanistic view of the mind (you might still hold a formal belief in souls, but for all practical purposes you are thinking of a broken mind as you think of a broken leg). These assumptions are all core tenets of the paradigm of biological psychiatry. They may look obvious to you specifically because you are a psychiatrist, and more generally because you grew up in a culture where they are commonly considered part of the mainstream scientific world view, but to a Hippocratic physician embedded in a culture of Aristotelian philosophy, they wouldn’t be obvious at all.
Melancholia, right down to the biological symptoms, has a long and (surprisingly) detailed medical history. From what I can remember of Aristotle, at least, it was seen as a condition resulting from a biological predisposition to excessive black bile, which is pretty much the exact same kind of starting point.
I read Kuhn many years ago and came away with the belief that you can’t think too hard about what objective reality is without confusing yourself, and in all the intervening years I haven’t really found an adequate solution to this mild problem. And I say mild because I can perfectly well pretend that what I think is objectively real is in fact so and it doesn’t change how I behave.
I’m quite frankly more interested in the mysterious technologically advanced paradigm-busting artifacts that Scott references. They all seem like frauds to me, if only because they are isolated examples that do not support one another and instead contradict massive amounts of other competent evidence. Does anyone know if any of them have any reasonable likelihood of being real?
Of course, the sophisticated viewers of the play knew the correct scientific-materialistic explanation: opium is made of round particles, and round things are soothing.
I think part of the problem Scott was having is that there are a few different things that all get lumped together under the umbrella of “science”. Kuhn’s description of paradigms sounds more or less correct to me as a physicist, but I don’t know if it makes as much sense for things like meta analysis of the effectiveness of prescription drugs. I suspect that if Kuhn says that social science may not even be science, he might feel the same way about meta analysis. However, if the purpose of science is to be able to make useful predictions about the world, then there are several different pieces that are needed.
Scientists acknowledge that there’s some divide between experimental and theoretical work, in that if you optimize a skillset for one, you’re not necessarily optimized to do the other (and more specialization is required now than probably at any point during the past). But even within experiment and theory, there are subdivisions that profoundly affect the way each of those things is practiced.
For example, I’m a theorist, and a lot of my work has been developing perturbative techniques in something called density functional theory. Density functional theory is a principle that says that you should be able to do quantum mechanics using the electron density instead of the wave function. This sort of sounds like a new paradigm, except that the rigorous proofs of density functional theory only show that there should be some (probably absurdly complex) isomorphism between the electron density and the Hamiltonian, and since you can calculate the energy from the Hamiltonian, there’s a mapping from the electron density to energy. This sounds good, except, we already know the Hamiltonian, so all we’ve done is add an extra step. And so in practice, what people mean when they say density functional theory is an approximation to quantum mechanics where we replace the true electron-electron interaction with some functional of the electron density.
So, I know that sounded like I went off on a huge tangent, but my point is that you have quantum mechanics, which represents a paradigm shift, and then once you have quantum mechanics, you have theorists that try to pick apart the equations to see how they work and what they imply, which doesn’t create a paradigm shift. Then because the equations are so complicated, you have theorists that try to find approximations to them that can be solved and that still give pretty good answers. Then, a level below that, you have the kind of work that I’ve done a lot of, which is to say, “the mathematics of these approximations is actually quite a bit different than the mathematics of the real thing, so we need to pick apart these equations and see what they imply, rather than just assuming that everything works the same way as the real thing”. And then you also have people who actually apply the equations to run simulations, which I’ve done quite a bit of as well, and which is also a totally different thing.
Maybe it’s more apparent to me because I work somewhere where the lab’s ultimate goal is to provide useful technologies to the navy, but I think the purpose of science is to be able to use the behavior of the natural world to our advantage. Maybe this wasn’t always the case, when people were debating why the planets move along the paths they’re on, but science and technology are closely intertwined. And to be able to develop technology, you need a model that has good predictive power, so that when you want a device to do a specific thing, you can say “here are the principles that we need to exploit to make this function happen”.
Scientific models shouldn’t just predict a few new phenomena that maybe we’ll observe at some point in the future; they need to predict the behavior of things so well that we can reliably exploit the principles of the model to build things. The process of doing science requires collecting data and deriving models from that data, but it also involves testing the things we build from those models. Because this type of testing counts as science too, statistics, data analysis, and data collection have been pulled under science’s umbrella, which then naturally extends to cover things like social science.
Related, and a project I work on: SciDash: Programmatic validation and visualization of scientific models (especially neuron models)
I think part of the reason you see a difference in how Kuhn is viewed between philosophers and scientists is his tendency to do something that has a long history of really annoying philosophers (and that is part of why post-modernism never got popular in philosophy the way it did in a bunch of other disciplines). It shows up in the bits where he talks about philosophical consequences rather than describing things in science.
It has a lot of names (motte-and-bailey, two-step, etc.), but it’s basically this:
Person 1: “… therefore everything is relative, there is no truth or falsity!”
Person 2: “Wait hold on WHAT? Look – ‘I have socks on’. That’s true.”
Person 1: “No no, come on. I didn’t mean that you can’t say things that are true or false, or that sentences can’t be true or false. I meant that there is no Capital-T-Truth, no one specific theory which encapsulates all possible knowledge and explains everything in every way.”
Person 2: “Oh, ok yeah that’s obvious.”
Person 1 (looks around to make sure Person 2 has wandered off): “… And since there is no truth or falsity there is no reason to prefer one statement over another and claims about one being true or false are all expressions of power and ideology!”
Economics is not physics, and Deirdre McCloskey said a number of things that would be considered Kuhnian in her classic Journal of Economic Literature article, “The Rhetoric of Economics” (1983). E.g., she has something to say about the First, Second, and Third in the post. I found the original article more focused than the later book.
If you ever wanna do research instead of clinical work, give me an email, and maybe we could find something for a psychiatrist to do in our predictive coding of emotions lab!
Has anyone engaged David Deutsch on the subject of Kuhn? He has the best critique of Kuhn I’ve yet seen in Ch 13 of his The Fabric of Reality, which I won’t retail here, not so much because it is too involved as because in various ways the whole book leads up to it. I despair of a synopsis; hence the general inquiry.
I find Deutsch exasperating (my fault, not his) because even when I’m pretty sure he’s wrong, I usually can’t marshal a good counter-argument. So I seek help on this wherever I turn.
Full disclosure: I buy the Many-Worlds “interpretation” of quantum mechanics, just as I buy the dinosaur “interpretation” of fossils. I’m no huge fan of Yudkowsky, but I agree with him on this. Sure there are problems, so let’s address them instead of ignoring/stonewalling the issue. I know a bunch of physicists, including a couple of regulars at CERN, who are willing to discuss physics with me (especially after a few drinks), if only in an isn’t-that-cute-a-guy-without-a-physics-PhD-wants-to-talk-physics kind of way. Anything from dark matter to muon tomography. But raise Many-Worlds or any challenge to the Copenhagen Interpretation and it all shuts down; I can’t get anything out of them but straight-down-the-line Bohrian Complementarity/shut-up-and-calculate.
But even though I’m generally persuaded by Deutsch re physics, I have difficulty accepting a lot of his philosophical positions, especially his radical optimism. Input would be welcome, and a bite-sized place to start would be Deutsch’s critique of Kuhn. I realize that I am assuming (and then some) that someone among SSC’s commentariat has read Deutsch’s book(s) and is willing to post something about it, but I take crazy chances that way.
Scott, looking both ways is actually part of a massive paradigm shift – the notion that vehicles are the dominant and prioritized occupants of the road.
I would just like to say thanks for linking to John Salvatier’s blog post (Reality has a surprising amount of detail). As an engineer by training and trade, that boiling water example hits home for me. I often feel like the point he is trying to make is not well enough appreciated by many people, so I’m glad to see it get some exposure here.
Edit: Reading a bit further, it seems like this post is touching upon some of the same themes, if not the same particular message.