Yesterday’s review of Surfing Uncertainty mentioned how predictive processing attributes movement to strong predictions about proprioceptive sensations. Because the brain tries to minimize predictive error, it moves the limbs into the positions needed to produce those sensations, fulfilling its own prophecy.
This was a really difficult concept for me to understand at first. But there were a couple of passages that helped me make an important connection. See if you start thinking the same thing I’m thinking:
To make [bodily] action come about, the motor plant behaves (Friston, Daunizeau, et al, 2010) in ways that cancel out proprioceptive prediction errors. This works because the proprioceptive prediction errors signal the difference between how the bodily plant is currently disposed and how it would be disposed were the desired actions being performed. Proprioceptive prediction error will yield (moment-by-moment) the projected proprioceptive inputs. In this way, predictions of the unfolding proprioceptive patterns that would be associated with the performance of some action actually bring that action about. This kind of scenario is neatly captured by Hawkins and Blakeslee (2004), who write that: “As strange as it sounds, when your own behavior is involved, your predictions not only precede sensation, they determine sensation.”
PP thus implements the distinctive circular dynamics described by Cisek and Kalaska using a famous quote from the American pragmatist John Dewey. Dewey rejects the ‘passive’ model of stimuli evoking responses in favour of an active and circular model in which ‘the motor response determines the stimulus, just as truly as sensory stimulus determines movement’.
Still not getting it? What about:
According to active inference, the agent moves body and sensors in ways that amount to actively seeking out the sensory consequences that their brains expect.
This is the model from Will Powers’ Behavior: The Control Of Perception.
Clark knows this. A few pages after all these quotes, he writes:
One signature of this kind of grip-based non-reconstructive dance is that it suggests a potent reversal of our ordinary way of thinking about the relations between perception and action. Instead of seeing perception as the control of action, it becomes fruitful to think of action as the control of perception [Powers 1973, Powers et al, 2011].
But I feel like this connection should be given more weight. Powers’ perceptual control theory presages predictive processing theory in a lot of ways. In particular, both share the idea of cognitive “layers”, which act at various levels (light-intensity-detection vs. edge-detection vs. object-detection, or movements vs. positions-in-space vs. specific-muscle-actions vs. specific-muscle-fiber-tensions). Upper layers decide what stimuli they want lower levels to be perceiving, and lower layers arrange themselves in the ways that produce those stimuli. PCT talks about “set points” for cybernetic systems, and PP talks about “predictions”, but they both seem to be groping at the same thing.
I was least convinced by the part of PCT which represented the uppermost layers of the brain as control systems controlling various quantities like “love” or “communism”, and which sometimes seemed to veer into self-parody. PP offers an alternative by describing those layers as making predictions (sometimes “active predictions” of the sort that guide behavior) and trying to minimize predictive error. This allows lower level systems to “control for” deviation from a specific plan, rather than just monitoring the amount of some scalar quantity.
My review of Behavior: The Control Of Perception ended by saying:
It does seem like there’s something going on where my decision to drive activates a lot of carefully-trained subsystems that handle the rest of it automatically, and that there’s probably some neural correlate to it. But I don’t know whether control systems are the right way to think about this… I think maybe there are some obvious parallels, maybe even parallels that bear fruit in empirical results, in lower level systems like motor control. Once you get to high-level systems like communism or social desirability, I’m not sure we’re doing much better than [strained control-related metaphors].
I think my instincts were right. PCT is a good model, but what’s good about it is that it approximates PP. It approximates PP best at the lower levels, and so is most useful there; its thoughts on the higher levels remain useful but start to diverge and so become less profound.
The Greek atomists like Epicurus have been totally superseded by modern atomic theory, but they still get a sort of “how did they do that?” award for using vague intuition and good instincts to cook up a scientific theory that couldn’t be proven or universally accepted until centuries later. If PP proves right, then Will Powers and PCT deserve a place in the pantheon beside them. There’s something kind of wasteful about this – we can’t properly acknowledge the cutting-edgeness of their contribution until it’s obsolete – but at the very least we can look through their other work and see if they’ve got even more smart ideas that might be ahead of their time.
(Along with his atomic theory, Epicurus gathered a bunch of philosophers and mathematicians into a small cult around him, who lived together in co-ed group houses preaching atheism and materialism and – as per the rumors – having orgies. If we’d just agreed he was right about everything from the start, we wouldn’t have had to laboriously reinvent his whole system.)
Something about this framing is bothering me. One thing is that error-minimization and surprisal-minimization should look basically the same, and so it’s not obvious that framing one as approximating the other is better than framing them as isomorphic to each other. (Whether or not they’re actually isomorphic depends on the math going on underneath, but I think both frameworks could be consistent with any expected experimental results and major aspects of each seem natural and consistent with each other.)
Another thing is that control systems seem better motivated than prediction systems. In the PCT framing, there’s the neural hierarchical control framework, and then another sub-neural control system that modifies the neural hierarchy that isn’t well understood. (This seems to be related to stuff like calorie acquisition and sexual satisfaction and so on, such that those are inherently rewarding in some way.) I don’t think the surprisal-minimization framework has an explanation for eating food being rewarding, but the controls framework can.
And, in general, when you think of people as agents, agents acting to control their perceptions seems like a better framing than agents predicting what they’ll do and then doing that (especially if they predict that they’ll act to control their perceptions!). In particular, think about Bayes scores, which are canonically measured as the log of the probability assigned to observed events, and so are always negative; an agent that just seeks to minimize surprisal is better off receiving no sensory data by being dead! An agent that seeks to maintain homeostatic balance observes that being dead is not being homeostatically balanced.
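To make that worry concrete with a toy sketch (the two sensory streams and their probabilities are invented for illustration, not drawn from either theory’s literature): an agent scored purely on average surprisal will always prefer the maximally boring, perfectly predictable stream.

```python
import math

def avg_surprisal(probs):
    """Average negative log-probability the agent assigned to what it actually saw."""
    return -sum(math.log(p) for p in probs) / len(probs)

# A "dark room" stream: constant input the agent predicts with near-certainty.
dark_room = [0.999] * 10

# A rich sensory stream: varied inputs predicted with ordinary confidence.
rich_world = [0.5, 0.7, 0.2, 0.9, 0.4, 0.6, 0.3, 0.8, 0.5, 0.7]

# Pure surprisal-minimization ranks the dark room strictly better.
assert avg_surprisal(dark_room) < avg_surprisal(rich_world)
```

Nothing in the bare score penalizes the degenerate stream, which is exactly the dark-room (or dead-agent) objection.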
While I started out writing this comment because that framing rubbed me the wrong way, I think I should just state my overall position: I view PP and PCT as basically isomorphic, with PP being how you would look at things from an unsupervised learning frame and PCT being how you would look at things from a supervised learning frame. But one of the obvious things to do in a supervised learning agent in a big world is for it to also have unsupervised learning capabilities, and encourage using them at the right levels through something like curiosity, boredom, and overstimulation. Given the deep mathematical similarities, that means a synthesis of the two frames is very simple and seems desirable.
(But it also seems to me like PCT is the foundational frame; at their heart, it looks like people want homeostatic balance, not perfect prediction, and predictions are just extremely useful tools at reaching homeostatic balance in a big world.)
As a question: couldn’t things such as curiosity and boredom serve as supervisions, if you take as default the desires for knowledge and activity respectively? Many learning frames could thus be described as unsupervised within their own limited context (e.g. the process of learning to bring a spoon to your mouth is internally governed by avoiding the surprise of boiling soup ending up where it shouldn’t be) but ultimately organized by a fairly standard and straightforward set of principles, with the key one being an acquisition of new and more powerful predictive models. Otherwise, there would seem to be a slight disconnect between the unsupervised frame and the supervised one, where (as you so accurately point out) the proper goal for the unsupervised frame is to simply avoid all new data whatsoever.
(In fact, this might be explanatory for people who seem to avoid any kind of “unsupervised learning” and have very little curiosity: for whatever reason, their knowledge drives are low in the same way as some people have low sex drives, and thus they seek to shut out all potentially disruptive stimuli on those fronts by avoiding anything that might make them think.)
I work as a controls engineer, and while all I know about PCT and PP at the moment is what I’ve read in these 3 posts, I was thinking that I would implement them exactly the same way. So yes, based on what’s here I’d agree they’re isomorphic. Maybe Scott didn’t agree with the things that Powers thought were being controlled for at a higher level? That’s incidental to the theory IMO – the framework looks the same.
Every large scale dynamic electronic machine (airplane, petroleum plant, etc) is controlled by a hierarchical system of observer-predictor-controller loops. These pass information between layers not with the low level information they operate on, but an “error signal” which is the difference between their measured output and predicted output. You can stack as many of these up as you want, and get a coherent self-correcting output at every level from an abstract concept like “fly me to New York”. Or you oscillate wildly when something in your model is broken. Add in stuff like adaptive control and you can start to see a pretty nice formalized model of things we see in biology (I’d love to read more on people who have applied this to biological processes – the different neurochemicals being used as different pathways here is super interesting. Flood of dopamine in the brain = all error signals go to 0 = everything is right in the world?). PP and PCT both seem to be doing this. It’s a way of approaching our thought process that makes a lot of sense to me.
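For what it’s worth, the “stack of loops passing error signals” picture can be sketched in a few lines. Everything here is made up for illustration (the gains, the time step, and the trivially simple plant); the point is just that the outer loop never touches the low-level state, only its own error, and its output becomes the inner loop’s reference.

```python
def run_cascade(target_position, steps=200, dt=0.05):
    """Two stacked proportional controllers: the outer (position) loop
    sets the reference for the inner (velocity) loop. Each loop acts
    only on its own error signal, never on the raw low-level state."""
    position, velocity = 0.0, 0.0
    k_outer, k_inner = 2.0, 5.0
    for _ in range(steps):
        position_error = target_position - position  # outer error signal
        velocity_ref = k_outer * position_error      # outer output = inner reference
        velocity_error = velocity_ref - velocity     # inner error signal
        velocity += k_inner * velocity_error * dt    # inner loop drives the "plant"
        position += velocity * dt
    return position

# The hierarchy settles on the high-level goal without the top loop
# ever commanding the plant directly.
final = run_cascade(10.0)
```

With badly chosen gains or time step the same structure oscillates wildly, which matches the “something in your model is broken” failure mode above.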
If it’s control systems all the way up, this way of looking at cognition implies a single measure of goodness that is being maximized at the highest level. Perhaps an evolutionary survive & thrive imperative that is going on above the levels we are conscious at? It fits pretty well with ideas about executive functions. But now I’m getting out of my area of expertise.
I was wrinkling my nose a bit at this because I can’t really see a meaningful difference between “your arm moves because your brain predicts your arm will have moved” and “your brain tells your arm ‘move'”, but okay, I can see how this works in some instances, e.g. reaching out for a mug on the desk without really looking, because you have expectations of how far away it is, what direction to reach in, how to take hold of it and so on, without constantly consciously going “right a bit, up, now left a bit, bit more, down”. Your brain makes a prediction, your arm manoeuvres to fit the predicted range of motion. This is what we’re doing when we’re babies: we’re flailing limbs around and learning “woah, okay, doing that means this”, so we start off with a garbled mess of sense data that we gradually reduce to the nice predictive top-down modelling as we go through the developmental stages.
I think (a) this is a decent theory of how the brain works but I don’t think it touches how the mind works and if it thinks it does, that will be a mistake (b) it’s a decent theory but it’s not the General Theory Of Brains and shouldn’t fall so in love with itself that it ignores anything contradictory.
Also, are “hyperpriors” or “stuff what is hardwired in so we have something to work with when we first start interpreting sense data” the same as good old-fashioned “instincts”?
Most of my exposure to good old-fashioned instincts was in the stimulus-response behaviorist paradigm. Touch a baby’s cheek and it turns toward it because it’s executing a program that helps it nurse. (The etymology seems to back up that interpretation.)
But what instinct is often used to point at is “inborn”–for some reason, it’s easier to make people afraid of snakes than to make them afraid of rabbits. Likely this is some genetic mechanism that has set up the fear system such that it is encouraged to learn snake-like things. There’s a hyperprior interpretation of that, a hardwiring interpretation of that, and so on; the main place they differ is in what specifically the inborn mechanism is able to do.
I think something is missing here:
Deiseach : Your brain makes a prediction, your arm manoeuvres to fit the predicted range of motion.
Bobi : We could say the brain makes a prediction, if by that we mean a reference, but the arm doesn’t manoeuvre to fit the predicted range of motion. Perception does. The perception of the arm is “manoeuvring” to fit the predicted “perception” (the reference). You don’t know what your arm is doing until it is perceived. The “error” is the difference between the perceived arm position and the reference.
Only if you have a single top-level loop; I think the typical PCT view is that you have a bunch of “terminal goals” that are top-level neural loops. (Oftentimes there’s a non-neural control system applying some corrective pressure to them.)
“An agent that just seeks to minimize surprisal is better off receiving no sensory data by being dead! An agent that seeks to maintain homeostatic balance observes that being dead is not being homeostatically balanced.”
If you have Surfing Uncertainty or want to borrow it from me, this is addressed in Section 8.10, but it’s hard to understand and not very convincing. It seems to be one part repeating the word “embodiment” like a mantra, one part saying “Yeah, okay, that’s because of something other than predictive processing, we didn’t say this model explained everything,” and one part basically Powers’ solution – saying that we have hard-coded “predictions” of getting enough to eat or being the right temperature or whatever.
Interestingly, Clark’s description of what a hypothetical predictive agent that really tried to minimize surprisal without these constraints would do is sit in a dark room without eating or socializing or doing anything. I’m struck by the relevance to depression here, though maybe it’s a coincidence.
Sounds good, I’ll email you about borrowing it.
The quantity referred to in the book as “prediction error” is better known as “variational free-energy”. If you go look at the definition here, you’ll see that
Divergence = Free-energy - Surprisal. We can of course re-arrange that to
Free-energy = Divergence + Surprisal. Divergence (KL divergence) is always at least zero, so by minimizing the “prediction error” (the free-energy), we’re first minimizing the divergence (of an approximating distribution from the true posterior distribution — or of map from territory). After we’ve minimized away that divergence (if we ever really manage to hit zero divergence, which we usually won’t), we’ve then got
Free-energy = Surprisal.
So ordinary variational inference tightens an upper bound on prediction error (free-energy), which is what Clark told us. Of course, even once you’ve got prediction error down to equal the surprisal, the surprisal (negative log marginal likelihood) for what you’re seeing could still be pretty high. What can you do then to minimize the prediction error?
You can change what you’re seeing, which is action.
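That decomposition can be checked numerically with a toy two-state model (the prior, likelihood, and approximate posterior below are arbitrary numbers chosen just to exercise the identity):

```python
import math

# Toy generative model: two hidden states and one observation o.
prior = {"h1": 0.7, "h2": 0.3}       # p(h)
likelihood = {"h1": 0.2, "h2": 0.9}  # p(o | h) for the observation we actually got

# Marginal likelihood of the observation, and its surprisal (-log p(o)).
p_o = sum(prior[h] * likelihood[h] for h in prior)
surprisal = -math.log(p_o)

# True posterior p(h | o).
posterior = {h: prior[h] * likelihood[h] / p_o for h in prior}

# Some approximate posterior q(h) the agent might currently hold.
q = {"h1": 0.4, "h2": 0.6}

# Variational free energy: F = E_q[log q(h) - log p(h, o)].
free_energy = sum(
    q[h] * (math.log(q[h]) - math.log(prior[h] * likelihood[h])) for h in q
)

# KL divergence of q from the true posterior (of map from territory).
kl = sum(q[h] * math.log(q[h] / posterior[h]) for h in q)

# Free-energy = Divergence + Surprisal, term for term.
assert abs(free_energy - (kl + surprisal)) < 1e-9
```

Minimizing F over q drives the KL term toward zero (perception); once q matches the posterior, the only remaining way to lower F is to change the observation itself, i.e. to lower the surprisal term (action).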
There are two questions: (1) Is it true? (2) Which theory predicts it?
Actually, there is a prior question: (0) What does it even mean?
I’ve encountered lots of PCT enthusiasts who love that slogan, but none of them have convinced me that it means something useful. Many of them say that we don’t have veridical access to the real world, which isn’t exactly news. Others say that it’s just a restatement of control theory. I really don’t see that, but it seems like a pretty good reason to ignore the slogan.
Scott claimed that it just meant that behaviorism is bad and that people today shouldn’t care about the slogan. If he has changed his mind and decided that it means something useful, probably he isn’t giving PCT credit because he didn’t understand the slogan when he was reading PCT. So he should try to start over and evaluate things on his current understanding, not his beliefs cached over a period of varying understanding.
But what do you believe? Are you enthusiastic about the slogan? You didn’t seem to be in on LW. If you aren’t enthusiastic about the slogan, you probably don’t mean the same thing as Scott. And even if you are, you might not mean the same thing. And thus you can’t judge whether PP or PCT better justifies his interpretation of the slogan.
While I reject most interpretations of the slogan, here is something concrete: Hypnotists tell us
Assuming that is true, which theory predicts it? Does this distinguish PP from PCT?
Imagine my brain as a bunch of levers and gauges. It means there’s a linkage between the gauges and the levers such that the levers are used to keep the gauges in a particular configuration. (This is the “basically a restatement of control theory” view.)
So what does that mean concretely? Currently, I feel thirsty and have an empty glass on my desk. So I spring into action!
Now that I’ve refilled my glass and drunk some water, I no longer feel thirsty and there’s no longer an impetus to action.
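Spelled out as a toy loop (the names, the gain, and the linear “drinking” dynamics are all mine, purely for illustration): the gauge is the perceived thirst, the set point is “not thirsty”, and the lever is pulled in proportion to the error until the gauge reads right again.

```python
def thirst_loop(thirst, reference=0.0, gain=0.5, steps=10):
    """Minimal perceptual control loop: the perceived quantity (thirst)
    is compared against a reference; the error drives the action
    (drinking), which in turn changes the perception."""
    for _ in range(steps):
        error = thirst - reference  # gauge reading vs. set point
        drink = gain * error        # lever pulled in proportion to error
        thirst -= drink             # acting on the world changes the perception
    return thirst

# Starting out quite thirsty, the loop settles back toward the set point,
# at which point there is no error and hence no further impetus to action.
remaining = thirst_loop(8.0)
```

Once the perception matches the reference the error is zero and the lever goes still, which is the “no longer an impetus to action” part.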
You’re right that I’m not enthusiastic about the slogan as a slogan; I think that hierarchical controls seems like a much better descriptive model of human motivation and goals than something like utility theory, or surprisal-minimization theory. (I like the “hierarchical controls” label more than the slogan because I think it’s more descriptive to the sort of people I talk to.)
I think this is a high-probability event under both PP and PCT, so it can’t do much distinguishing. My sense is it’s a slight edge for PP (maybe .99 for PP and .98 for PCT, or something).
(I also think this is true, in part because I’ve done it to myself with the related swinging watch example.)
Could you walk me through how control theory leads to this example?
I could try, but I’m worried that my general audience explanations won’t serve as Douglas Knight explanations, or we’ll have double illusion of transparency, or so on. This might be easier to do via email (I’m @gmail.com), and I expect to do better if I have an idea of what counts as a walkthrough / not a walkthrough.
Have you encountered the book that created that slogan? Or Powers’ other writings? What he wrote is the best place to start if you want to know what he meant.
Why would I care what Powers meant by the slogan? If no one else who read the book got something useful out of the slogan, why would I? Sure, you claim to be impressed by the slogan, but so what? Your being star-struck and incoherent is a strike against the book.
My comment is not about what Powers meant. I don’t care about distributing credit to individuals. My comment is about the arguments made by Scott and by Andy Clark. Which is why I emphasize that Scott means something different than Vaniver.
You asked, “What does it even mean?” Hence my reference to the original source. That is what will tell you what it means. That is why I referred it to you. What I am impressed by is the actual work, not the slogan. But you prefer to concentrate on the slogan, ignore the work, misinterpret the advice to inform yourself as a credit spat, and rant against targets pulled from your own imagination.
…agents acting to control their perceptions
Bobi : I don’t know where you got this, but it’s a misreading of PCT. The title of the book is Behavior: The Control of Perception, and it doesn’t mean that actions are for controlling perception. From the PCT control loop (LCS III) it’s obvious that behavior is the result of the control of perception, not a “regulator” of perception. In PCT, behavior is produced by the difference between the reference and the perceptual signal. It’s true that PCT claims that all is perception. But what are you aware of besides perception?
Vaniver : (But it also seems to me like PCT is the foundational frame; at their heart, it looks like people want homeostatic balance, not perfect prediction, and predictions are just extremely useful tools at reaching homeostatic balance in a big world.)
Bobi : Extremely good thinking about how any organism survives. I agree with every word.
Scott: have you read Immanuel Kant’s Critique of Pure Reason? I remember reading somewhere that you were in philosophy as your undergrad, so there’s a substantial chance of it, but on the off chance you haven’t yet had the opportunity, I highly recommend it. I (along with some of the commenters in the previous thread) think there’s a lot of overlap between it and PP – not total, but close in several key regards. Like Greek atomism, Kant’s work makes a bunch of claims purely based on sitting down and thinking about the problem which in turn appear to be entirely correct. In particular, his expression of the careful interplay between empirical appearances and concepts as constituting the cognition of objects (as featured in the Transcendental Deduction and elsewhere) is highly predictive of PP, although I’m still a little on the fence as to whether his Categories were jumping the gun.
If you’ve read it and think there’s no relation, then I’d be interested in hearing.
Yeah, see for example here, which I still don’t feel got a really good answer.
I can see the resemblance between this and Kant’s ideas, but I think in the end PP treats perception as a glorified Photoshop Sharpen Image filter, and Kant treats it as the source of time and space and math and order and everything else.
In particular, Kant’s whole point was to resolve certain philosophical questions that AFAICT PP doesn’t do anything to resolve. That makes me think maybe the resemblance is more of a coincidence.
Did not see that post! I’d like to try and answer as best I can.
Part one: Kant was talking in part about mathematical laws, but more critically about the whole project of empirical science. Empirical science is, at its core, this kind of up-down structuring that PP is working with: expectations and sense data interact to create meaningful results. The problem at the time was that the general philosophy was torn between Locke’s style of hyper-realism and idealism from folks like Berkeley and (although not wholeheartedly) Descartes. That is, from the one direction you had people who were claiming that sense-data gave the full and true story every time, and from the other, people who said only the expectations meant anything. Idealism was pretty well scuttled from the get-go, because nobody really has ever believed it, but Lockean realism had to wait for a refutation by Humean skepticism (e.g. through the problem of induction). The problem there was that Hume didn’t leave anything really great to go on after the fact, except for a fairly undeveloped notion of “natural law.” This is where Kant steps in.
Kant begins by making a pretty bold declaration: in order for empirical science to be possible, we need to have some kind of synthetic a priori cognition. I think the best translation into contemporary lingo is “hyperpriors,” although it goes without saying that there are some differences. The idea, at any rate, is that we need to have something plugged into the system from the start in order to get meaningful results, which (surprise surprise) is what machine-learning researchers have discovered, as ksvanhorn points out here. As another, yet even stronger version of the claim: Kant says that there is a valid category “thinking being,” and that since the category is valid, there are necessary characteristics universal to all thinking beings without which they would not be thinking beings. This is a very, very important idea, which Kant doesn’t make explicit nearly as well as he ought to, and which has a ton of implications for basically every mind-based field as a foundational belief.
But if we have the necessary information plugged in from the start, what good does the sense data do? Kant’s answer: there’s a basic level of sense-molding, which sets everything we experience within the context of space and time (I’d personally recommend a cautious and conditional expansion of this, but space and time in the experiential sense, not the sense denoted by contemporary science are as universal as you can get), and then everything else is treated by a kind of calculus of sense-data and expectations. The terms he uses, for reference, are “empirical appearance” and “concept,” respectively, although his use is more subtle and deserves a proper and serious reading rather than blind substitution. The result of this is the cognition of empirical objects, which are the building blocks of science: we declare that a tree is a tree and can be studied as a tree, for example.
So, as a preliminary, why is this basically the same style of work as PP? Simple: because Kant and PP both are trying to explain what goes on in human cognition and perception such that something like empirical science is the best possible method, over (say) Aristotelian examination of scientific syllogisms. The apparent difference is basically just due to the fact that after the immense success of empirical science, with sound philosophy providing theoretical justification for it, there’s been basically no argument anywhere in the kind of Western academia that PP grew out of as to whether empirical science is the way to go. For the creators of PP, and probably you and me besides, there’s just never been a question about it. However, I’d argue that right now we’re actually in a fairly similar place to where Kant was when he was writing: on the one hand, the Analytic tradition has been trying to push some fairly naive positions about sense-data and mathematics, and on the other hand, the Continental traditions have been trying to undermine those positions with the various sorts of conditioned knowledge and lived experience and what have you (I won’t try too hard to get terms right here, because no two Continentals use the same ones anyway). Getting lost somewhere in all this kerfuffle is the human mind, which doesn’t seem to have the kind of perfect grasp of reality required for all the bizarre Analytic thought experiments but also adheres to the world more strongly than a Continental would have it. In to the rescue comes PP, which (unbeknownst to it) is justifying empirical science all over again, as a means of being kind-of-right about the world.
But the bigger question, I think, is about things-in-themselves and where mathematics come from if not from the world, right? I understand why it seems so totally absurd that Kant’s saying that we’re just imposing mathematical laws on the world, when those mathematical laws we’re imposing are good enough to land little metal darts on Mars and then have them sing Happy Birthday. The problem is, basically, that the word you’re using for “world” is what Kant would call the “empirical world,” while what Kant says escapes mathematics is the “things-in-themselves.” Things-in-themselves form a critical category which is totally absent from most contemporary discourse, which is basically what’s behind the veil. Consider the old skeptical argument, whether from Descartes’ evil demons or the more pop-culture Matrix, that denies that we’re experiencing the real world altogether. Are we simply wrong about everything we know, if we’re living in such a world? No, Kant would say; we’re right about that entire empirical world. However, the transcendental reality beyond that escapes all our current knowledge, and so we’re right to say that we don’t know the ultimate conditions under which we experience everything. What we can do is examine our own nature as minds, though, and from that declare some necessary conditions of experience: for example, that we need to sense time in order to have ordered thoughts, and space to have any experience of an “external” world (i.e. a space outside ourselves for that world to be in). This is why mathematics and logic can carry over so well between humans, and even between humans and aliens (yes, Kant wrote about aliens from space, and I’m not being silly or facetious here). The math may not be what’s behind the curtain in the end, but because of what we can tell about the structure of mind itself (through the Categories, for Kant), we can know that math will apply to our empirical worlds as well as the empirical worlds for anything with a mind. 
(This is an incredibly strong claim, and I think he’s entirely correct, with some hesitation about his particular Categories.)
It is worth noting that Kant is very passionate about insisting that space and time are not features of things-in-themselves, or whatever lies beyond the veil. The justification for this, as I understand it, is that space and time as we know and experience them are features of mind, not of whatever it is that lies beyond the veil. In his book, it’s miscategorization to try and apply them to what there really is out there. I think he’d be more amenable to the suggestion that there might be something vaguely analogous out there, but that we really can’t know anything about it in its ultimate state. Oh, and the reason why he’s so insistent about that point is because a lot of the deductive arguments towards idealism in his time started out by declaring space or time to be inherent features of the world as it really is. At least in part, it’s a direct stab at Berkeley. Hope that makes it feel a bit more reasonable.
So, as an overall summary: I’d say that a properly fleshed-out PP would have to end up declaring perception as the source of math at the very least, and probably time and space along with it, or else fall straight into the good old-fashioned map-territory problem (seriously, math is the best example of an iterative predictive model that’s non-identical with what’s under the hood). PP doesn’t have much to say about Berkeley and Kant doesn’t have much to say about the hegemony of European science, but that’s probably more the anachronism than anything, because the same battle seems to be going on behind all the faff with different words. For your black box, ignoring the obvious detail that the aliens could have put a different equation in, the reason why you can understand the answer they would give and they can understand the answer you would give is that there’s the same fundamental structure of mind shared between you (and you’re experiencing the same empirical object). All that we need in order to both come to the conclusion that 2 + 2 = 4 is the same basic software vis a vis counting, not some property of what’s going on behind the scenes or a magical spark going from mind to mind.
That ended up being quite long! I would be happy to go over any part of this in more detail, and if you give me some time to dig up my Kant volume, I can even give you citations. It’s good stuff, I believe, although Kant made a serious mistake when he tried shifting over to ethics, which is why he ends up saying such weird things there. Hope it all made sense and wasn’t too much of a bore.
I really enjoyed reading this; thanks.
Has it? How so?
Well, naive analytic philosophers have been pushing naive positions, as is true for the naive members of any tradition. That analytic philosophy in general is particularly prone to being naive on these subjects is a canard circulated by our enemies, of course, but one that entirely too many people have been tricked into believing. I assume he’s thinking of Language, Truth, and Logic, or perhaps basing his judgment on what others have said about analytic philosophers rather than on what analytic philosophers have actually said themselves.
I don’t know about your open thread question, sorry, but if this is the ‘asking confused questions about Kant’ section can I ask mine:
One argument I often see is that the discovery of non-Euclidean geometries messed up Kant’s program for a priori geometry. I’ve never really understood that argument, though.
It is really surprising and interesting that the parallel postulate is independent and we can have different geometries. But I don’t see how this has changed my own inbuilt geometric intuition. Nothing about my perceptual intuition for parallel lines has changed in a way that makes them look more like they ever want to cross! And if I’m trying to reason about a non-Euclidean geometry and want to visualise something, I’m still going to do some sort of projection into Euclidean space, e.g. use some model of the hyperbolic plane.
I’m sure there’s some horrible German post-Kantian philosopher I should read, but I don’t know which one.
Yeah, the reason you don’t get that argument is that it doesn’t work. Kant definitely did the Kant thing and failed to anticipate that there could be some fairly radical expansions to prior fields of knowledge, but although a priori geometry by itself isn’t so much a thing, a priori Euclidean geometry is a thing, as well as a priori non-Euclidean geometry. Heck, you can even introduce non-Euclidean geometry as a weirder extension of normal geometry and just keep on tickin’, same as any other field of math. All of this allows for the same basic program for Kant, which is that at no point in the study of any kind of geometry do you absolutely have to stop, go outside, and measure things in order to prove your next point. You might not know what proofs are useful without going outside and noticing, say, that the earth is approximately ball-shaped, but completing them takes zero measuring and experimentation (as opposed to figuring out the gravitational constant). That’s all he needed in order for it to be properly a priori.
Thanks, that’s helpful, as I could never make much sense of the objection. Agree that a geometry being non-Euclidean doesn’t stop it being a priori.
I don’t know, I think I wanted to claim something a bit stronger though. In your long comment above you’re careful to distinguish ‘space and time in the experiential sense, not the sense denoted by contemporary science’. I kind of want to say that that experiential sense just is Euclidean, and Kant was right!
(I tend to think about flat tangent spaces to curved manifolds in this weird semi-Kantian sense – that it’s my local experiential approximation, or something, but that reality might not agree with me on large scales.)
Your stronger claim seems at least partway reasonable to me, but I think it needs to be predicated with the whole flat-surface thing.
I mean, take it this way: geometry clearly rests on our intuition of space, seeing as we come up with the axioms as abstracts and imagine the necessary space for them in our heads. Euclidean geometry, then, is geometry which rests upon the two-dimensional intuition of space, which is useful for things like architecture and medium-range navigation. In that two-dimensional sense, it’s quite true that two parallel lines never cross. However, we also have a three-dimensional intuition of space. When we imagine things in that three-dimensional space, the category of “parallel” starts to fall apart a little. Take a sphere: it’s pretty easy to imagine two kinds of parallel lines on it, as longitudinal and latitudinal cuts. One type never crosses, while the other does. The question becomes, then, what we view as being most essential to our technical definition of parallel: the never-crossing, or the Euclidean expression of angles to a tangent. Mathematics has chosen the latter, and so we say that non-Euclidean geometry allows parallel lines to cross. However, none of this contradicts our intuition, which quite accurately traces different types of lines across curved surfaces.
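The longitude-versus-latitude point can actually be checked numerically. Here’s a small sketch (the sampling density and the particular latitudes are arbitrary choices of mine) that samples two “longitudinal cuts” and two “latitudinal cuts” of the unit sphere and finds how close each pair gets:

```python
import math

def sphere_point(lat, lon):
    """Latitude/longitude in radians -> point on the unit sphere."""
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def dist(p, q):
    """Straight-line (chord) distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Two meridians ("longitudinal cuts"): sample each from pole to pole
# and find how close they get. They meet at the poles.
lats = [math.pi * (i / 100 - 0.5) for i in range(101)]
meridian_a = [sphere_point(t, 0.0) for t in lats]
meridian_b = [sphere_point(t, 1.0) for t in lats]
min_meridians = min(dist(p, q) for p in meridian_a for q in meridian_b)

# Two circles of latitude ("latitudinal cuts"): they never meet.
lons = [2 * math.pi * i / 100 for i in range(101)]
circle_a = [sphere_point(0.3, l) for l in lons]
circle_b = [sphere_point(0.6, l) for l in lons]
min_latitudes = min(dist(p, q) for p in circle_a for q in circle_b)
```

The meridians touch at the poles (minimum distance zero), while the latitude circles keep a fixed separation, which is the intuition the paragraph above is tracing.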
So that’s where I’d leave it. Kant was correct to say that Euclidean geometry is basically the structure of our spatial intuitions, but didn’t properly account for there being spatial intuitions past the two-dimensional plane. That’s why I say: “Kant definitely did the Kant thing and failed to anticipate that there could be some fairly radical expansions to prior fields of knowledge” as opposed to “Kant was incorrect.” He wasn’t. He was right.
(I’d say we don’t have any intuitions past three dimensions, though, and that four-dimension-plus geometry does have to be analogized and related through three-dimension geometry, insofar as the intuition goes. That’s what I understand the move you’re making to be about, and I think it’s the correct move, but I’d just add in the third dimension before cutting it out. Perhaps that’s exactly what you meant, and I was just using a stricter definition of Euclidean geometry being synonymous with plane geometry while you were talking about things that don’t fit within our three-dimensional intuition – if that’s the case, then I think I agree with your point entirely.)
Some of the latest notions of nonlocality are certainly resonant with the idea that Euclidean geometry is at least an emergent phenomenon from the underlying universe-an-sich. Naturally I wouldn’t suggest that Kant had any inkling of that, even if I knew much at all about Kant beyond what I’ve read here. But it’s kind of a nice validation of his train of thought that it took him so far and no farther.
What does PP say at high levels that you can tell it diverges from PCT?
My impression, which might have been wrong, was something like PCT wanting higher levels to control a scalar quantity (like “amount of love”).
PP doesn’t really present higher level predictions as being motivational (though maybe it could), but if they were I feel like it would be more like “similarity to my ideal relationship”.
I don’t know much about either theory, but it seems to me that you might be strawmanning PCT.
If PCT does not try to control the “amount of love”, but it has some higher-level mechanism that sets a relationship goal and some lower-level (but still quite high level in the hierarchy) control mechanism that tries to maximize the similarity between your current relationship status and the set relationship goal, doesn’t it become isomorphic to PP?
Am I missing something?
I’m new here. Can you explain to me what PP is ?
The math for PP allows you to take arbitrary descriptions of possible worlds, compare them to real-life sense-data, and get a score out the other side (the prediction error) telling you how far the description is from real life. The key word here is arbitrary: if you can predict the sense-data involved in, say, Full Communism or True Love, you can get a prediction-error score for how Full Communism-y your economy is, or how far your relationship is from True Love.
Since PP then posits that you can act to reduce those error scores, you would thus be acting to bring about Full Communism or True Love.
In conventional LW terms, PP says that any model or belief which pays rent can also guide action to make itself real. This also implies that models/beliefs like, “God wrote physics at the beginning of time” or “there exists a robustly metaphysical space of a priori facts we access via intuition” cannot guide action: anything unempirical is unpragmatic.
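To make the “score out the other side” concrete, here’s a minimal sketch. The feature names and numbers are entirely made up for illustration, and real PP machinery involves hierarchical probabilistic models rather than a flat squared distance, but the shape of the idea is the same:

```python
def prediction_error(predicted, observed):
    """Squared distance between predicted and actual sense-data:
    the 'score' telling you how far the description is from real life."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

# A hypothetical high-level model ("True Love") cashed out as predicted
# sense-data features -- say, time together, affection, trust.
true_love_prediction = [0.9, 0.8, 0.95]
current_observation = [0.4, 0.7, 0.5]

error = prediction_error(true_love_prediction, current_observation)
# Acting to reduce this error score is, on the PP story, acting to
# bring True Love about.
```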
I’m unclear as to what the content of claiming a PP style perceptual control of action really amounts to.
I mean in what sense isn’t it true that any system which learns which control signals to provide in order to create a certain desired perceptual state can be redescribed in terms of minimizing predicted error? In other words given a control system whose goals are understood in terms of certain perceptual states, e.g., an automated vehicle trying to ensure it is between the lines and learning the correct control to apply to the wheels to achieve that, can’t we always just call the desired behavior (staying between the lines or correcting course by such and such amount) a prediction and thereby vindicate the idea that control is just another instance of minimizing perceptual predictive error?
Maybe there is something I’m missing, but I’m worried that one could redescribe just about any system which learns which controls to apply by comparing the actual sensory inputs to some desired state. The fact that there seems to be considerable freedom in choosing what to call a prediction (it apparently encompasses fairly abstract notions that can have pretty attenuated links with the actual sensory input, as it must if the theory is to handle even rudimentary actions) and little specification of a particular manner in which perceptual predictions are minimized adds to this worry. If so, that would make this model trivially true but also pretty vacuous.
Maybe it would help if someone could describe a plausible way this kind of learned control behavior could function that is ruled out by this theory.
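For what it’s worth, the lane-keeping redescription can be made concrete. In this sketch (the gain and the one-line “plant” are invented simplifications), the same update rule reads equally naturally as “minimize control error” or “minimize prediction error”, which is exactly the worry:

```python
def control_step(position, target, gain=0.5):
    """One step: steer proportionally to the gap between the sensed
    lane position and the target position. Call `target` a reference
    (control theory) or a prediction (PP); the arithmetic is identical."""
    error = target - position       # control error, or prediction error
    steering = gain * error         # action that shrinks the error
    return position + steering      # simplified plant: action moves us

position = 2.0   # metres off-centre
target = 0.0     # lane centre: the state the system "predicts"/desires
for _ in range(20):
    position = control_step(position, target)
# position has converged close to the lane centre
```

Nothing in the code distinguishes the two vocabularies; any content PP adds would have to come from somewhere other than this loop.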
Hey Scott, one of the issues that bothers me about the whole topic is how bad people are about probability in general. How does having a really bad flood now mean that we shouldn’t expect a flood for the next 100 years? How is the system generating these (strange) abstract predictions?
This all feels very convenient. In this post, and the book review, you made a lot of claims about the predictive processing model, and how it neatly explains a bunch of different already-existing data.
Isn’t that a Science Deadly Sin? Like, anyone can look at existing data and say “Ahah, I know why this happens”, but only real scientists can say “This will happen; this is impossible and will never happen”?
I feel like these posts need an “epistemic status” warning. Unless you’re seeing something I’m not seeing, and I’m just projecting my confusion on you.
But until you put forth some prediction, or some form of model that says “this is what can happen” without reusing existing observations… well, I won’t be convinced, and I don’t think anyone should be.
The prediction is just that all nearly-thinking things have to work this way, and that anything that works in a different way (e.g. classic computer structure) just doesn’t qualify as a thinking thing. Anything with an upstream-downstream flow of sense data and expectations and a set of motivating forces will get most of the way there by itself. Anything without those will never get close.
In addition, you can see Scott trying to apply this to his own work with the schizophrenia suggestions. The idea is, of course, that if upstream-downstream malfunction is what’s going on with most mental illnesses, you can start to predict with greater accuracy which drugs will work there, which currently isn’t going well at all. Scott, not being an active biomedical chemist, doesn’t have any particular predictions, but a dedicated research team could come up with some and then test them. (Related: the reason that current drug-effect predictions aren’t working well could be that existing models don’t actually explain or predict anything.)
So yeah, this does have the potential to be a just-so story, but it’s already branching out in the direction of useful forecasting. The theory does come before certain practices sometimes.
That last bit makes this sound like it is not an actual prediction about the world but rather just a delineation of terminology. How do you cash it out into an actual prediction?
Well, the category “thinking thing” has some other requirements that we expect to see. It’s not like it was just made up for the sake of PP. It’s a category that was basically made for humans alone, and is tenuously extended with some severe caveats to other animals (some people try it with plants; I don’t buy it). If something is able to act very much like an animal in how it learns and interacts with the world, then we can put it into the same category as them. If it doesn’t, we won’t put it in that same category. Intuition and standard use are what can keep this category grounded, instead of just being so loose as to say whatever we want.
For the record, I think that some of our robots might be at pillbug or earthworm levels of competence. That’s not saying a lot, but it is quite cool!
It’s pretty difficult to come up with a simple, elegant, internally-consistent model that fits all the existing facts well. And those kind of models tend to work well out of sample. In other words, you can actually have a lot of confidence before you test things based on new evidence.
(I think Eliezer wrote about this with respect to Einstein, especially his comment about how if the eclipse experiment didn’t confirm his results, he would “Be sorry for the good Lord; the theory is correct”.)
Of course, human beings are well-short of being perfect Bayesians, so you are absolutely correct that we must ultimately require our theories to make good out-of-sample predictions.
A lot of the papers applying predictive processing to psychiatry were novel to PP.
This model is very similar to HTM from Jeff Hawkins – Hierarchical Temporal Memory. The Hawkins model has a *lot* more detail though.
This is essentially how a lot of control systems in industrial processing, robotics, aviation, etc. work. You have a set of sensors that measure the state of your system, and you have a set of actuators that affect the state of your system, and you use a feedback controller to drive the actuators until your system approaches some target state as measured by the sensors. Autopilot in an aircraft? Feedback controller that detects the aircraft’s attitude and moves the control surfaces to move the aircraft toward your desired attitude. Nuclear power plant? Feedback controller that moves the control rods in and out to match actual output power with desired output power. Feedback controllers are so exceptionally useful and powerful that quite often you don’t even need any sort of open-loop or predictive controller at all.
I’ve read a theory that a human being is basically just a shitload of feedback controllers layered on top of each other. Not just proprioception, but all the way up to complex behaviors like “I am hungry -> seek food to reduce hunger”. No word on whether this extends all the way up to things like “I want to be stronger -> Spend the next six months in a gym.”
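A toy version of the layered-controllers idea might look like this. Everything here (the quantities, gains, and the eating rule) is invented for illustration: a high-level loop controlling “hunger” hands a reference to a low-level loop controlling position.

```python
def feedback(perceived, reference, gain):
    """Generic negative-feedback step: output proportional to error."""
    return gain * (reference - perceived)

hunger = 10.0    # high-level perceived state
position = 0.0   # low-level perceived state
food_at = 5.0    # where the food is

for _ in range(100):
    # High level wants hunger == 0; while its error is nonzero, its
    # "output" is a reference for the lower level: go to the food.
    hungry = feedback(hunger, 0.0, gain=1.0) < 0
    target_position = food_at if hungry else position
    # Low level drives position toward the reference it was handed.
    position += feedback(position, target_position, gain=0.3)
    # Eating reduces hunger once we are at the food.
    if abs(position - food_at) < 0.1:
        hunger = max(0.0, hunger - 1.0)
```

After the loop, hunger has been driven to zero and the body has ended up at the food, even though no level explicitly planned the whole trajectory; each loop just corrected its own error.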
Actually, when I read this part I thought, “Sounds like the mental mechanism that underlies confirmation bias!”
To (briefly) recap my comment yesterday: I think it’s possible that this sensory-perception-prediction-behavioral-control system is *also* what we use for developing our epistemology and moral decision-making, and that maybe a lot of cognitive biases derive from this.
Like, it’s very useful when navigating the world or interpreting speech to “tune out” noise that conflicts with your pre-existing beliefs, because your priors about the concrete physical world or about likely speech patterns are pretty accurate relative to the noise that exists. But this ends up being miscalibrated when applied to determining the truth of complex abstract propositions.
Thus the machinery optimized to tune out conflicting evidence so you can walk or talk ends up inappropriately tuning out conflicting evidence to your political or religious beliefs.
Basically your brain has just one module that models the world and checks it against the evidence, and its calibrated for walking around, not determining the truth of philosophical propositions and then working out what to do as a result.
To expand a little on this:
Suppose you parked your car in your driveway last night. Then you walk outside, and look at where you parked it, and you don’t see your car. You feel surprised. So you update your world model — maybe you remember that you actually parked somewhere else, or you think it was stolen, or you think that your wife probably drove it to work today.
By contrast, if you looked at the car and, because of the change in the position of the sun, the pattern of shadows from the leaves on it has changed, and the reflection of light off it is different, your brain doesn’t throw an alarm. It just says, “There’s my car.”
That’s roughly how our brain works to update its world model.
By contrast, when we perceive (say) events on the news through the lens of our political views, the contradictory evidence is rarely as stark as car there vs. not there. It’s closer to the pattern of shadows and angle of the sun causes the color to be somewhat different. So our brain tunes that out.
And when people *do* change their minds about things, it’s usually because they encounter unusually stark evidence. Something analogous to the sort of contradiction you perceive when you’re wrong about something in the tangible physical world. Because that’s what it takes to get our brain’s attention.
This also suggests that advice along the lines of “imagine the predictions of your model in terms of vivid and tangible physical things” might be useful to overcoming these biases. Or maybe just “pay attention to changes in the pattern of shadows.”
I think that’s a really good theory, but personally, I would then ask: well, what’s the point of philosophical propositions that have no sensorimotor grounding to test?
Two other comments:
This matches up with lots of practical sports advice, like keep your eye on the ball, look it into your glove/hands, etc — basically amounting to visualizing the outcome you want with laser focus. It’s very telling that most standard sports advice is about controlling perceptions and focus, not about physical mechanical skills. (And the same is true about standard driving advice, like “look far ahead”, “look where you want the car to go”, etc.)
Also, there is an interesting parallel to the “self-help” ideology around visualizations or affirmations; like you visualize success, and then your brain works to bring it about.
(Also: good job guys! We solved psychology!)
It’s the same in music, both singing and playing instruments. You’re supposed to hear the tone you want to make in your head first. This makes it way easier to actually make that tone. I think it even works for conducting – think about the music you want to hear, rather than about the gestures you’re about to make – but I’m not a good enough conductor to be able to tell.
That sounds a lot like a standard negative feedback control loop, just operating at a higher level.
I don’t understand why you find the high level so implausible. It seems to me the part which is obviously true, and seeing the thing about the lower levels just made me go, “Ah! Of course the same thing would be happening at every level!”
At the high level, identifying as a good person, for example, means predicting that you will do the kinds of things good people do, and it has some actual tendency to make you do these kinds of things. Likewise, if you have a detailed plan of what you are going to do tomorrow, you would be diverted from that plan by a million things if you did not constantly check back on the plan; the plan is pretty much the direct cause of you carrying it out.
Great review, you’ve gotten me excited. I’ve been reading several papers that point towards the message of this book, but an authoritative textbook is a great way to structure your thought. Just a question: why are you so sure that neurotransmitters will map nicely to human-relatable concepts? Sure, dopamine encodes prediction error, but in a different region of the brain it regulates lactation (it’s an inhibitor of prolactin). It wouldn’t surprise me if the role of a neurotransmitter is highly dependent on the concrete pathway it’s acting upon.
While we’re calling things in advance, this physical movement part is obvious nonsense, most likely retrofitted to clap your hands and believe folk myths that are fashionable mantras in sports circles. (and to a lesser extent in broader society)
I’m mainly 100% sure of that because it’s obvious claptrap, but for some arguments:
If movement is caused solely by prophecy, why with practice can you do physical things without thinking about it, and indeed even without even being aware of it?
How does anyone ever get a movement right the first time, without past experience from which to visualise? Like, how do you see toddlers learning? Do they sit in deep meditation first, to get their visualisation right, then venture out from the cloister of their mind to test their spiritual powers, before returning to their enclave of 1 to refine through long meditation and contemplation? Or do they throw their arms all about the place like an idiot, and gradually refine their control through experiment, feedback, and iteration… as if they are mechanically learning a human’s control set, rather than the skill of glorious vision, like [insert artist here] the great Andrew Hussie, creator of the webcomic Problem Sleuth.
There is a difference between anticipation and prediction. I could anticipate a paralysed arm moving really hard without being so stupid as to bet that it would happen. When engaged in physical competition, you anticipate things according (roughly) to likelihood times relevance, not just raw likelihood. What you expect to happen, i.e. what developments you would bet on occurring (as opposed to what developments you would prepare for), is often practically useless as a separate measure from anticipation.
And anticipation always goes with planned actions anyway, so it would be easy to confuse them. If I plan to get some milk from the fridge, I’m going to anticipate that, because duh, it’s what I’m about to fucking do, but also so I don’t suddenly wake up disoriented to my physical location when I get there.
Electrical impulses, and habits of judging and controlling them, are a totally complete and totally intuitive explanation. Bringing belief into it looks a lot like a superfluous addition on top.
Lastly, it *would* have a certain religious appeal if you found an explanation for why a weird dogma floating around in your society was actually true. “Ah yes, so that explains why people so often repeat that thing which I could never understand before, I’m learning so much…”. How pleasant this feeling of satisfaction and reconciliation is, even vicariously..
If movement is caused solely by prophecy, why with practice can you do physical things without thinking about it, and indeed even without even being aware of it?
I think you’ve answered your own question there: with practice. That is, we work and work and work on the jumbled sense data and train our brain and muscles about ‘do this thing’ until it’s second nature and we don’t have to consciously think about it. Then the brain just predicts “we will have caught that ball that is going to be hit/kicked towards us in twenty seconds” and the body does.
Other things like breathing seem to be hard-wired in and I’m sure someone better with biology can explain what makes us start breathing once we leave the womb.
I think you’re right that prediction alone isn’t going to make anything happen (I can “Believe I can fly” all I want, it’s not going to have me soaring like a bird), but if what is meant is that “prediction is short-hand for saying ‘brain decides to do something and doesn’t have to consciously supervise every tiny motion from making the muscle fibres twitch to telling the legs how to move, it just goes ‘I’m going to have some cereal’ and the lower levels do all the ‘okay, that means we need to get milk out of the fridge, which means walking, legs do your thing – legs tell muscles – muscle cells tell energy reactions to happen’ etc. without top-level brain doing anything else”, then it works. If you decided “I’m going to have some cereal” and then found yourself cleaning the bathroom, I think you would be entitled to be in a state of surprisal and indeed aggressal 🙂
On the subject of cereal, it almost seems like this sort of thing (combined with being disrupted somehow) can explain putting the milk away in the pantry and the cereal in the fridge, but I’m not sure how to put it into words.
I don’t think it necessarily involves mental processes that we’re consciously aware of to the extent you’re suggesting. The first thing this post brought to mind for me was a study about how house flies are so nimble, which I vaguely remembered hearing about years ago. A little Googling turned up this press release. Turns out it’s from 1998! I have no special interest or expertise in anything even tangentially related to the subject, so I don’t know why it would have stuck in my mind well enough to resurface nearly 20 years later, if it hadn’t offered some counterintuitive but nontrivial (if true) insight. From the link:
Possible horrendous sidetrack, but the Greek atomists didn’t use “vague intuition to cook up” atomic theory, it was a logical way of accounting for the possibility of real change in the context of Parmenides’ arguments about Being. If nothing can come from nothing, then whatever exists exists necessarily and can’t not exist, but then change is impossible and must be illusory (Zeno’s argument – he was a student of Parmenides). But if what exists is nuggets of that kind of unchanging Being in a Void, then you can have real change as shifting agglomerations of nuggets of unchanging being. Atoms were called “atoms,” i.e. indivisible, because they were envisioned as just such nuggets of Parmenidean unchanging Being.
Re. the topic: as others are pointing out, these sorts of ideas have echoes in sports advice and musical instrument training.
Indeed. Powers himself considered the higher levels that he proposed to be speculative extrapolations, and recommended that research should begin with the bottom levels, where you would actually have the possibility of getting rigorous results. He followed his own advice on this. The experiments he conducted and the simulation programs he wrote involved simple things like using a mouse to track a moving spot subject to random disturbances. (For example.)
But if you would be interested in psychological investigations on PCT lines, you could look up the work of Warren Mansell. I don’t follow it myself, but he is a research psychologist with a substantial body of work.
I have been fascinated by this theory since I read it. What really struck me is noticing when I glance at something and misidentify what I see at first. I have a fairly good photographic memory and have noticed I can grab a frame of what I “see” in that instant. And to my surprise the misidentified image is right there. For example I recently spotted a possum laying on the ground that had been paralysed by a tick. I have never seen that before and initially identified it as my sister’s cat. The after-image had the colouring and shape of my sister cat even though the possum looked quite different. My brain pasted the expected image into place and it was only after a fraction of a second of the animal not moving in quite the way I expected that the visual image changed to what was really there.
It makes me ponder deeply if there are more persistent illusions that we experience and never unveil because there is no incongruous input and no surprise to alert us.
It’s very nice to see some discussion of PCT again in this group. I agree that PP and PCT are very similar, as theories. This is because they are both applications of control theory, which describes the operational principles of dynamical, closed-loop, negative feedback systems. I believe the essential difference between PCT and PP lies in what these theories are trying to explain. PP is trying to explain what organisms do, where “what they do” (their behavior) is seen as the production of observable actions. PCT is also trying to explain what organisms do, but according to PCT what organisms do is produce controlled results. Controlled results are consequences of actions. An example is the distance between your car and the car in front of you, which is a consequence of actions such as pressing on the accelerator or brakes.
Controlled results are variables (controlled variables) that are controlled in the sense that they are maintained in goal or “reference” states while being protected from external disturbances. So the distance between cars is a controlled variable that is maintained in a reference state (such as a distance of 2 meters) while being protected from disturbances (such as variations in the speed of the car in front). PCT explains this controlling as a result of a negative feedback process where a neural signal that corresponds to a perceptual representation of the controlled variable (the perceptual signal) is kept matching a neural signal that specifies the desired or reference state of that variable (the reference signal). Any difference between the perceptual and reference signals is an error signal that drives actions which continuously “push” the controlled variable and, thus, the corresponding perceptual signal, back into a match with the reference signal.
This description makes it clear that the reference signal in PCT is, indeed, functionally equivalent to the “prediction” signal in PP. Where PP and PCT seem to diverge is in the role ascribed to perception in the control process. In PP, the actions that keep perceptions under control are the “main event” while the nature of the perceptions that are controlled is of secondary importance. In PCT it’s the exact opposite; the nature of the perceptions that are controlled is the “main event” while the actions that keep these perceptions under control are of secondary importance. Indeed, in the hierarchical PCT model, the actions that keep a perceptual variable under control are typically controlled perceptual variables themselves. For example, the movements of accelerator and brake that are the actions that control the distance between cars are themselves controlled variables that are brought to a reference state and protected from disturbances by variations in the muscle forces that produce these movements.
From a PCT perspective, understanding behavior is a matter of understanding the types of perceptual variables that organisms control. The types of perceptual variables organisms control can, hypothetically, range from simple (such as the perception of the position of a chess piece on the board) to complex (such as the strategy being pursued by placing the piece in that position). Scott Alexander expressed skepticism about the PCT hypothesis that there are control systems in the brain that control perceptual variables as complex as “love” or “communism”. But the fact is that we do see people (including ourselves) controlling these variables. We see it in ourselves when we feel, for example, that we are not being loved as much as we want by someone we love; we see it in ideologues who say that a member of the group is not really a “communist”. In order to make these judgments people must be able to perceive something called “love” and “communism” and have a reference for the states of these variables that specifies the right or desired level for the persons controlling them. We don’t need to know how the nervous system (or a computer) can perceive such things in order to know that they are perceived.
The fact that we do, indeed, perceive and control aspects of the world of different levels of complexity is demonstrated in one of my online demos of control theory:
The demo shows what it means to control perceptions of what Powers called configurations, transitions, and sequences. The demo also shows that perceptions of different levels of complexity seem to be hierarchically related, as hypothesized by Powers in “Behavior: The Control of Perception”. A more formal demonstration of this fact can be found in: Marken, R. S., Khatib, Z., & Mansell, W. (2013). Motor control as the control of perception. Perceptual and Motor Skills, 117, 236-247.
Other demonstrations of PCT phenomena can be found at my website, http://www.mindreadings.com; my own set of demos is at http://www.mindreadings.com/demos.htm. And Adam Matic, a PCT-oriented roboticist now working in Spain, has written online versions of a set of demos that were originally written by Bill Powers, in Pascal I believe. I think these demos, along with Powers’ “Behavior: The Control of Perception” and my collection of papers called “Mind Readings” (https://www.amazon.com/Mind-Readings-Experimental-Studies-Purpose/dp/096241543X/), provide a nice, concrete introduction to the principles of PCT.
Richard Kennaway said
I just thought I’d mention that I’ve done a bit of investigating along PCT lines as well. My published (and a few unpublished) papers are collected in three books by me, Richard S. Marken:
Mind Readings: Experimental Studies of Purpose (https://www.amazon.com/Mind-Readings-Experimental-Studies-Purpose/dp/096241543X)
More Mind Readings: Methods and Models in the Study of Purpose (https://www.amazon.com/More-Mind-Readings-Methods-Purpose/dp/0944337430/)
Doing Research on Purpose: A Control Theory Approach to Experimental Psychology (https://www.amazon.com/Doing-Research-Purpose-Experimental-Psychology/dp/0944337554/)
I tried to post a comment here earlier today and it didn’t seem to make it. Hope this does.
Hi Scott, I really think you have just scratched the surface of PCT here. For example, PCT has a revolutionary take on research methodology; it also specifies the levels of the hierarchy clearly, and the upper levels of the hierarchy give it the capacity to inform a much wider area of science, such as a universal psychotherapy known as the Method of Levels. In contrast, predictive processing seems to maintain the status quo in terms of how we regard science and society. See pctweb.org.
University of Manchester
I stopped here by accident to read something about control in organisms, and I started to read:
SA : Yesterday’s review of Surfing Uncertainty mentioned how predictive processing attributes movement to strong predictions about proprioceptive sensations. Because the brain tries to minimize predictive error, it moves the limbs into the positions needed to produce those sensations, fulfilling its own prophecy.
Bobi: I’m wondering how the brain tries to minimize “error” by moving the limbs into the positions needed to produce these sensations. How does the brain control the limbs to move to exactly some position, presumably producing the exact amount of excitation in muscle tension, so that the limbs will move into the “exact” position, producing the “exact” sensation? Did I miss something?
It does not need to. All it has to do is something that will get closer to the intended position.
Compare the room thermostat. It does not need to calculate how much energy to pump into or out of the room. If the room is too hot or too cold, it does not need to know why. All it has to do is pump energy in when it is too cold, out when it is too warm, and do nothing when it is close enough to the target temperature. In effect, it delegates the computation of the effect it is having to the system it is controlling. It does not need any sort of model of the system, nor need to make any predictions: the system itself is in effect its own model, and observing the actual result of its actions makes prediction unnecessary.
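The thermostat analogy can be written out in a few lines; this is a sketch with invented numbers, not anyone's published model:

```python
# Bang-bang thermostat: acts only on the SIGN of the error, with a dead band.
# It has no model of the room; the room itself produces the consequences.
def step(temp, target=20.0, band=0.5, outside=5.0, dt=0.1):
    if temp < target - band:
        heat = 2.0        # too cold: pump energy in
    elif temp > target + band:
        heat = -2.0       # too hot: pump energy out
    else:
        heat = 0.0        # close enough: do nothing
    # the room "computes" the effect: heating/cooling plus leakage outdoors
    return temp + (heat + 0.1 * (outside - temp)) * dt

temp = 12.0
for _ in range(500):      # 50 simulated time units
    temp = step(temp)
print(abs(temp - 20.0) < 1.0)   # True: hovers near the target
```

The controller never calculates how much energy the room needs, and never predicts anything; observing the result of its own action at each step is enough.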
That’s an exquisite explanation, Richard.
RK : It does not need to. All it has to do is something that will get closer to the intended position.
Since when in PCT is “behavior controlled”? Or did I miss something? The “brain” in the PCT explanation doesn’t do something (?!) so that the limbs will get closer to an intended position. The “brain” in PCT controls the perception of the limbs, among other perceptions (billions of them), to get the perceptions it wants. That’s all the brain has: perceptions.
“Error” varies with incoming variation in the perception of both environments, not just with “moving limbs to an intended position”. It could be that the limbs (effectors, output) are involved in changing perception, but that’s not always necessary. Organisms will mostly “correct” errors automatically, but behavior will also be used if the “brain” judges that it’s needed. And some movements can be “fired” unintentionally, “automatically”.
The example with the room thermostat is too old and useless for the case we are talking about. It’s quite obvious that you think room temperature is regulated by the output of the thermostat. Some principles are the same, but that’s not how the nervous system functions. Describe to me instead how an organism’s temperature is controlled. Why wander around like a cat around “hot milk”?
Would you say that a homing missile wanders around its target? They use negative feedback highly effectively.
It is, as far as we know, and quite a lot is known about this, just how the nervous system functions to control body temperature in homeothermic animals. There is a control system which senses body temperature, and according to whether the temperature is too high or too low, varies various things such as sweating, burning fat, shivering, making fur stand on end, contracting surface blood vessels, etc. In a fever, the temperature set-point has been raised. It is believed that this enhances the body’s ability to combat invading pathogens.
This control system is located in the hypothalamus. It contains neurons that are especially sensitive to excess heat, and with a little neural computation applied to those signals, other neurons activated by excess cold. It also receives signals from temperature sensors in the skin. (Involuntary shivering is triggered by skin temperature, not core body temperature — this is how that happens.) The most detailed description I’ve found of all this is at http://nba.uth.tmc.edu/neuroscience/s4/chapter03.html.
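The set-point idea can be sketched the same way. This is only an illustration with invented gains and an invented ambient leak, not physiology, but it shows how a fever is nothing but a raised reference driving the same machinery:

```python
# Error-driven thermoregulation sketch: the sign of the error selects the
# effectors (shivering/fat-burning vs. sweating/vasodilation); a fever is
# modeled purely as a raised set point. All numbers are invented.
def regulate(set_point, steps=500, dt=0.01):
    temp = 37.0
    for _ in range(steps):
        error = set_point - temp
        gain = 50.0 if error > 0 else 30.0   # heat-generating vs heat-losing effectors
        temp += (gain * error + 0.2 * (20.0 - temp)) * dt   # 20 °C ambient leak
    return round(temp, 1)

print(regulate(37.0))   # → 36.9  (normal set point, slight proportional droop)
print(regulate(39.0))   # → 38.9  (fever: same loop, higher reference)
```

Nothing in the loop changes between the two runs except the reference value, which matches the description of a fever as a raised temperature set point.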
WM: That’s an exquisite explanation Richard
Bobi: Warren, think with your own head about what PCT is about…
You are much better off reading about it – pctweb.org – but it all becomes clear after a while! The hierarchy of controlled perceptions entails that it’s negative feedback all the way down, with no need to compute motor commands in advance; what is computed is the reference values of the appropriate perceptual variables all the way down. See Henry Yin’s exquisite (!) work in neuroscience over the last three years…
RM: I think it’s worthwhile to point out that it’s not that there is “no need” to compute motor commands in advance. It’s that there is no possibility of computing the correct motor commands in advance — the commands that would produce the intended (or “predicted”, per PP) result — in a disturbance-prone environment.
RM: Could you post a reference or pointer to Henry’s article (or articles) that you were referring to? I also humbly suggest my spreadsheet model of a three-level control hierarchy as a way to get a feel for how hierarchical control works. The paper (which is reprinted in my book “Mind Readings”) is available at:
and the spreadsheet itself is available at http://www.mindreadings.com/demos.htm. It’s the link at the bottom of that page: “Spreadsheet Simulation of Hierarchy of Control Systems”.
It’s a good start. You probably used Google to find this article. You gave the basics of temperature regulation, although you could go deeper. You could maybe find some relation to “prediction”, although in physiological terminology you’ll find other terms.
What you described is basically thermoregulation for homeothermic animals. But PCT is about all kinds of organisms, from bacteria to humans. How do you think thermoregulation is achieved in microorganisms? With negative control loops or with reorganization?
No kidding, Sherlock. It is more than you have shown any sign of doing. You even asked earlier “Can you explain to me what PP is ?” to an article that is about PP and defines the abbreviation.
Yes, you probably discovered that by reading what I wrote. Well, that’s something, I suppose.
I could say “you can do better”, but that would be hope over experience. Your writing is rambling and incoherent. I have only replied to that small fraction that makes some discernible sense, and am primarily addressing all the other readers.
Would you say that a homing missile wanders around its target? They use negative feedback highly effectively.
What does this have to do with “control of behavior”? You are just selling something here that the sparrows and pigeons are already cheeping. The background of PCT is the use of negative feedback. Although positive feedback can also be used in organisms, it operates inside negative feedback loops. There are many surprising things organisms do to survive. So you missed the theme.
As far as your other post is concerned, it’s not clear what you wanted to say. You seem to think that there is some hierarchy of “controlled perceptions” (?!). Maybe a Bill Powers citation will help you remember something about the hierarchy of perceptions:
Bill Powers :
Briefly, then: what I call the hierarchy of perceptions is the model. When you open your eyes and look around, what you see — and feel, smell, hear, and taste — is the model. In fact we never experience anything but the model. The model is composed of perceptions of all kinds from intensities on up.
O.K., show us how a hierarchy of “controlled perceptions” works.
As far as Henry Yin is concerned, I think I have posted so many quotations from his work on CSGnet that you’ll never catch up with me. But it’s good that you have started to read him. Maybe you’ll learn something.
Here are some quotations from his work:
1. Control of Input. A control system always controls its input, not its output. Only perceivable consequences of behavior can be controlled.
2. According to mainstream engineering control theory, a control system controls its outputs, not its input. This is perhaps the most common fallacy today, both in engineering and in the life sciences [49, 55, 56]. This fallacy, an unfortunate legacy of cybernetics, is the result of imposing the perspective of the observer rather than using the perspective of the organism or controller. The mistake is to assume that what the engineer perceives and records, the “objective” effect of the system, is the output of the system.
3. As a result of these conceptual confusions, in traditional models negative feedback is always misunderstood. Placing the comparator outside the organism has the unintended effect of inverting the inside and outside of the system (Figure 5). What should be part of the organism is considered to be a part of the environment, and what should be part of the environment, namely, the feedback function, is considered a part of the organism. Consequently, the equations that describe how forces act on loads and accelerations and decelerations of the loads are assumed to be computed by the nervous system. These conceptual confusions have largely prevented any progress in the study of behavior for many decades.
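Point 1 above can be seen in a few lines of simulation. This is my own illustrative sketch, not code from Yin's papers: the perceived input settles at the reference for every disturbance, while the output varies to oppose the disturbance.

```python
# Sketch of "control of input, not output": the perception (input) is
# stabilized at the reference; only the output reveals the disturbance.
def run(disturbances, ref=10.0, gain=50.0, dt=0.01, steps=300):
    results = []
    for d in disturbances:
        output = 0.0
        for _ in range(steps):
            perception = output + d               # input = own output + disturbance
            output += gain * (ref - perception) * dt
        results.append((round(perception, 2), round(output, 2)))
    return results

print(run([0.0, 5.0, -5.0]))
# → [(10.0, 10.0), (10.0, 5.0), (10.0, 15.0)]
# the input is always 10.0; only the output tells you what the disturbance was
```

An observer recording only the output would see it change from run to run and might call that “the behavior”; from the controller’s perspective, what stays constant (the input) is what is being controlled.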
Bobi: Now it’s maybe clearer what I was talking about. Maybe you could add some of his quotations.
Richard Kennaway wrote :
No kidding, Sherlock. It is more than you have shown any sign of doing. You even asked earlier “Can you explain to me what PP is ?” to an article that is about PP and defines the abbreviation.
Well, well, well, somebody took off his mask and showed his cultural level of, we could say, an amoeba — or is this an honour for you? You ran out of arguments and you started to insult.
You have a PhD? How did you get it? I have never seen such a low level from somebody with an academic title. I thought that, with my academic title, we were on the same level: tolerant conversation with arguments. But I see you are a great exception. Besides that, I could say you are a liar. Show me where, directly above my question, there is an explanation of PP. I wrote that I’m new; from the mass of writings I can’t yet see what is what. From your academic title I was expecting help. So could you be so kind as to direct me to the name of whoever explained PP above my question? And remember, I’m not from an English-speaking group, so I need time to read and understand things. You would be surprised how fast I can find the information I want in my native language.
Are you always so explosive ?
Richard Kennaway wrote :
Yes, you probably discovered that by reading what I wrote. Well, that’s something, I suppose.
Well, you know nothing about me, and yet you are judging what I’ve read and what I didn’t. Is this how you got your PhD? You didn’t need to “collect” any evidence or put forward arguments; it’s so just because you said so, and they gave you a PhD. Wow… Which university is that? Oxford? Oh my god!!! Such a high-class university producing such culturally low-level “trained specialists”. And I was sweating and bleeding to provide evidence for every word I wrote in my own academic work.
Richard Kennaway wrote :
I could say “you can do better”, but that would be hope over experience.
You are right, I could probably do much better. But I will not, for known reasons. Since I’ve been on CSGnet I have been insulted, humiliated, degraded… You just continued the tradition. Maybe it’s because I’m from a little country… and in the eyes of big countries I probably don’t deserve honourable human treatment. Why should I explain anything… you already know everything!
Richard Kennaway wrote :
Your writing is rambling and incoherent. I have only replied to that small fraction that makes some discernible sense, and am primarily addressing all the other readers.
Well, this is quite a typical excuse for people who have run out of arguments. Everything in my writing is clear; my questions and assertions couldn’t be clearer. Yours and Warren’s are not.
I have passed exams in anatomy and physiology, so I could probably explain to you everything you want to know from these fields. You went into these fields with weak knowledge; that’s your problem. So I advise you to improve your knowledge, and maybe it’s better that you learn it for yourself. I personally think it’s better to use specialized literature like Guyton or Rhoades-Tanner than Google… I’m sure any doctor or psychiatrist here will know what I’m talking about. Or you can ask your colleagues at Oxford.
You still haven’t answered the question:
But PCT is about all kinds of organisms, from bacteria to humans. How do you think thermoregulation is achieved in microorganisms? With negative control loops or with reorganization?
I knew it was probably the most difficult question in PCT, because Bill Powers introduced the theoretical concept of “reorganization” with microorganisms (the E. coli bacterium) in his LCS III. It’s still a purely theoretical concept. Maybe you can change something?
And I hope we understand each other? All this was not necessary. It wasn’t me who pushed for our public conversation; it was YOU who started it. You are old enough to know yourself. You probably know that you have a very high “gain”.
Res ipsa loquitur.
I thought it was understandable the first time, or perhaps as understandable as it could be. Though maybe you revised the original post before I read it.