Book Review: Behavior – The Control Of Perception

[Epistemic status: I only partly understood this book and am trying to review it anyway as best I can]

I.

People complain that psychology is paradigmless; it never got its Darwin or Newton to tie everything together. Nowadays people are pretty relaxed about that; who needs paradigms when you can do n = 50 studies on a mildly interesting effect? But historically, there were all of these larger-than-life figures who were sure they’d found the paradigm, geniuses who founded schools which flourished for a while, made big promises, then either fizzled out or toned down their claims enough to be accepted as slightly kooky parts of the mainstream. Sigmund Freud. BF Skinner. Carl Rogers. And those are just the big ones close to the mainstream. Everyone from Ayn Rand to Scientology tried their hand at the paradigm-inventing business for a while.

Will Powers (whose name turns out to be pretty appropriate) lands somewhere in the middle of this pack. He was an engineer/inventor who specialized in cybernetic systems but wandered into psychology sometime in the sixties. He argued that everything in the brain made perfect sense if you understood cybernetic principles, and came up with a very complicated but all-encompassing idea called Perceptual Control Theory which explained thought, sensation and behavior. A few people paid attention, and his work was described as paradigm-shifting by no less of an expert on paradigm shifts than Thomas Kuhn. But in the end it never really went anywhere, psychology moved on, and nowadays only a handful of people continue research in his tradition.

Somehow I kept running into this handful, and they kept telling me to read Powers’ book Behavior: The Control Of Perception, and I kept avoiding it. A few weeks ago I was driving down the road and I had a moment of introspection where I realized everything I was doing exactly fit Powers’ theory, so I decided to give it a chance.

Powers specializes in control systems. The classic control system is a thermostat, which controls temperature. It has a reference point, let’s say 70 degrees. If it gets much below 70 degrees, it turns on the heater until it’s 70 again; if it gets much above 70 degrees, it turns on the air conditioner until it’s 70 again. This is more complicated than it sounds, and there are other control systems that are even more complicated, but that’s the principle. Perceptual Control Theory says that this kind of system is the basic unit of the human brain.
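To make the thermostat example concrete, here’s a toy sketch in Python of the loop Powers has in mind – compare a perception to a reference, act on the error, repeat. The numbers and the two-degree deadband are invented:

```python
def thermostat_step(perceived_temp, reference=70.0, deadband=2.0):
    """One tick of a crude thermostat: act on the error between
    the perceived temperature and the reference point."""
    error = reference - perceived_temp
    if error > deadband:        # too cold
        return "heater on"
    elif error < -deadband:     # too hot
        return "AC on"
    else:                       # close enough, do nothing
        return "idle"

# The controller never plans ahead; it just keeps cancelling error.
for temp in [64, 67, 70, 73, 76]:
    print(temp, "->", thermostat_step(temp))
```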

While I was driving on the highway a few weeks ago, I realized how much of what I do is perceptual control. For example, I was effortlessly maintaining the right distance from the car in front of me. If the car sped up a tiny bit, I would speed up a tiny bit. If the car slowed down a little bit, I would slow down a little bit. Likewise, I was maintaining the right angle relative to the road: if I found myself veering right, I would turn slightly to the left; if I found myself veering left, I would turn slightly to the right.

The theory goes further: while I’m in the car, I’m also operating as my own thermostat. I have a desired temperature: if I go below it, I’ll turn on the heat, and if I go above it, I’ll turn on the AC. I have a desired level of satiety: if I’m hungry, I’ll stop and get something to eat; if I’m too full, there’s maybe not a huge amount I can do but I’ll at least stop eating. I have a desired level of light: if it’s too dark, I’ll turn on the lights; if it’s too bright I’ll put down the sun visor. I even have a desired angle to be sitting at: if I’m too far forward, I’ll relax and lean back a little bit; if I’m too far back, I’ll move forwards. All of this is so easy and automatic that I never think about it.

Powers’ theories go further. He agrees that my brain sets up a control system to keep my car the proper distance from the car in front of it. But how do I determine “the proper distance”? That quantity must be fed to the system by other parts of my brain. For example, suppose that the roads are icy and I know my brakes don’t work very well in the ice; I might keep a much further distance than usual. I’ll still be controlling the distance, I’ll just be controlling it differently. If the brain is control systems all the way down, we can imagine a higher-tier system controlling “accident risk” at some level (presumably low, or zero) feeding a distance level into a lower-tier system controlling car distance at whatever level it receives. We can even imagine higher systems than this. Suppose I’m depressed, I’ve become suicidal, I want to die in a car accident, but in order not to scandalize my family I have to let the accident happen sort of naturally. I have a top-level system controlling “desire to die” which tells a middle-level system controlling “accident risk” what level it should go at (high), which in turn tells a lower-tier system controlling “car distance” what level it should go at (very close).

It doesn’t even end there. My system controlling “car distance” is sending signals to a lower-tier system controlling the muscle tension in my foot on the accelerator, giving it a new reference level (contracted muscles that push down on the accelerator really hard). Except this is an oversimplification, because everything that has to do with muscles is a million times more complicated than any reasonable person would think (at least until they play QWOP), and so there’s actually a big hierarchy of control systems just going from “want to go faster” to “successfully tense accelerator-related muscles”.
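To show what “control systems all the way down” might look like in code, here’s a hedged sketch of a two-tier cascade, where the output of the higher-tier controller becomes the reference for the tier below it. All the gains and numbers are invented for illustration:

```python
def control(perception, reference, gain):
    """A generic control unit: output is proportional to the error between
    what it perceives and what it wants to perceive."""
    return gain * (reference - perception)

# Tier 3: control perceived accident risk (reference: keep it near 0).
# Its output sets the reference following-distance for the tier below.
perceived_risk = 0.6                 # icy roads, say (0 = safe, 1 = certain crash)
desired_distance = 20 + control(perceived_risk, reference=0.0, gain=-50)   # metres

# Tier 2: control following distance, with the reference handed down from tier 3.
perceived_distance = 25              # metres to the car ahead
speed_adjustment = control(perceived_distance, reference=desired_distance, gain=-0.5)

# Tier 1 would take speed_adjustment as *its* reference and work out pedal pressure.
# A negative adjustment means "ease off the accelerator" (too close for icy roads).
print(f"desired gap: {desired_distance:.0f} m, speed adjustment: {speed_adjustment:+.1f}")
```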

II.

Actually, Powers is at his most convincing when he talks about these lower-level functions. At this point I think it’s pretty mainstream to say that muscle tension is set by a control system, with the Golgi tendon organs giving feedback and the spinal cord doing the calculations. Powers goes further (and I don’t know how mainstream this next part is, but I’m guessing at least somewhat), saying that this is a first-tier control system, which is itself controlled by a second-tier “direction” control system centered in the nuclei of the brainstem, which is itself controlled by a third-tier “position” control system centered in the cerebellum/thalamus/midbrain (a friendly amendment might add the basal ganglia, which Powers doesn’t seem to know much about).

If you stimulate certain parts of a cat’s midbrain, it will go into specific positions – for example, a position like it’s ready to pounce. So it seems like those areas “code for” position. But in order to have a neuron/area/whatever that codes for position, it needs to have hierarchical control over lots of lower-level things. For example, it needs to make sure the leg muscles are however tense they’re supposed to be in a pouncing position. So the third-tier position control system controls the second-tier direction control system at whatever level is necessary to make the second-tier direction control system control the first-tier muscle control system at whatever level is necessary to get the muscles in the right position.

The fourth- and fifth-tier systems, now well into the cortex (and maybe basal ganglia again) deal with sequences, eg “walking” or “playing a certain tune on the piano”. Once again, activating a fourth/fifth-tier system will activate this higher-level concept (“walking”), which alters the reference levels for a third-tier system (“getting into a certain position”), which alters a second-tier system (“moving in a certain direction”), which alters a first-tier system (“tensing/relaxing muscles”).

Why do I like this theory so much? First, it correctly notes that (almost) the only thing the brain can actually do is change muscle tension. Yet we never think in terms of muscle tension. We don’t think “I am going to tense my thigh muscle, now untense it, now tense my ankle muscle, now…”, we just think “I’m going to walk”. Heck, half the time we don’t even think that, we think “I’m just going to go to the fridge” and the walking happens automatically. On the other hand, if we really want, we can consciously change our position, the level of tension in a certain muscle, etc. It’s just that usually we deal in higher-level abstractions that automatically carry all the lower ones along with them.

Second, it explains the structure of the brain in a way I haven’t seen other things do. I always hear neuroscientists talk about “this nucleus relays signals to that nucleus” or “this structure is a way station for this other structure”. Spend too much time reading that kind of stuff, and you start to think of the brain as a giant relay race, where the medulla passes signals onto the thalamus which passes it to the basal ganglia which passes it to the frontal lobe and then, suddenly, thought! The obvious question there is “why do you have so many structures that just relay things to other structures?” Sometimes neuroscientists will say “Well, some processing gets done here”, or even better “Well, this system modulates that system”, but they’re always very vague on what exactly that means. Powers’ hierarchy of fifth-tier systems passing their calculations on to fourth-tier systems and so on is exactly the sort of thing that would make sense of all this relaying. My guess is every theory of neuroscience has something at least this smart, but I’d never heard it explained this well before.

Third, it’s the clearest explanation of tremors I’ve ever heard. Consider the thermostat above. When the temperature gets below 65, it turns on the heat until the temperature gets above 70, then stops, then waits as the hot air leaks out through the window or whatever and it’s 65 again, then turns on the heat again. If we chart temperature in a room with a thermostat, it will look sort of like a sine wave or zigzag with regular up/down motions. This is a basic principle of anything being controlled by a less-than-perfect control system. Our body has microtremors all the time, but when we get brain damage or some other problem, a very common symptom is noticeable tremors. These come in many different varieties that give clues to the level of brain damage and which doctors are just told to memorize. Powers actually explains them:

When first-order systems become unstable (as when muscles exert too much effort), clonus oscillations are seen, at roughly ten cycles per second. Second-order instability, as in the tremors of Parkinsonism, involves groups of muscles and is of lower frequency, around three cycles per second or so. Third-order instability is slower still, slow enough that it can be characterized as “purpose tremor” or “over-correction”. Certain cerebellar damage due to injury or disease can result in over- and under-shooting the mark during actions such as reaching out to grasp something, either in a continuous self-sustained oscillation or a slowly decreasing series of alternating movements.

This isn’t perfect – for example, Parkinsonian tremor is usually caused by damage to the basal ganglia and the cortex, which is really hard to square with Powers’ claim that it’s caused by damage to second-tier systems in the medulla. But after reading this, it’s really hard not to think of tremors as failures in control systems, or of the different types of tremor as failures in different levels of control system. For example, athetoid tremors are weird, seemingly purposeful, constant twisting movements caused by problems in the thalamus or some related system; after reading Powers, it’s impossible for me not to think of them as failures in third-order control systems. This becomes especially clear if we compare to Powers’ constant foil/nemesis, the Behaviorists. Stick to a stimulus-response paradigm, and there’s no reason damaged brains should make weird twisting movements all the time. On a control-systems paradigm, it’s obvious that that would happen.
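To see that zigzag concretely, here’s a toy simulation (made-up numbers) of the imperfect thermostat from a few paragraphs up – heat kicks in below 65, cuts out above 70, and the temperature trace oscillates forever instead of settling:

```python
temp, heater = 63.0, False
trace = []
for _ in range(40):
    if temp < 65:
        heater = True
    elif temp > 70:
        heater = False
    temp += 1.5 if heater else -1.0   # heating vs. heat leaking out the window
    trace.append(round(temp, 1))

print(trace)  # climbs to ~70, sags to ~65, climbs again: a zigzag, never a flat line
```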

There are occasional claims that perceptual control theory can predict certain things about muscles and coordination better than other theories, sometimes with absurdly high accuracy of like r = 0.9 or something. Powers makes some of these claims in the book, but I can’t check them because I don’t have the original data he worked with and I don’t know how to calculate cybernetic control system outputs. But the last time I saw someone bring up one of these supposed experiments it was thoroughly shot down by people who knew more statistics. And I found a blog post where somebody who knows a lot about intricacies of muscle movement says PCT can predict some things but not much better than competing theories. In terms of predicting very specific things about human muscular movement its record seems to be kind of so-so.

III.

And I start to get very skeptical when Powers moves to higher-tier control systems. His sixth tier is “relationships”, seventh is “programs”, eighth is “principles”, and ninth is “systems”. Although these tiers receive just as many pages as the earlier ones, they start sounding very abstract and they correlate a lot less well with anatomy. I understand the urge to postulate them – if you’ve already decided that the fundamental unit of the brain is the control system, why not try to explain things with control systems all the way up? – but it becomes kind of a stretch. It’s easy to see what it means to control the distance between me and the car in front of me; it’s harder to see what it means to control for “communism” or “honesty” or things like that.

I think the way things are supposed to work is like this. A ninth-tier system controls a very abstract concept like “communism”. So suppose you are a communist; that means your internal communism-thermostat is set to maintain your communism at a high level. That propagates down to eighth-tier principles, which are slightly less abstract concepts like “greed”; maybe your ninth-tier communism-thermostat sets your eighth-tier greed thermostat to a very low temperature because communists aren’t supposed to be greedy. Your eighth-tier greed thermostat affects levels of seventh-tier logical programs like “going to work and earning money” and “giving to charity”. I’m not really sure how the sixth-tier fits into this example, but let’s suppose that your work is hammering things. Then the fifth-tier system moves your muscles in the right sequence to hammer things, and so on with all the lower tiers as above.

Sometimes these control systems come into contact with each other. For example, suppose that along with my ninth-tier system controlling “communism”, I also have a ninth-tier system controlling “family values”; I am both an avowed communist and a family man. My family values system thinks that it’s important that I earn enough to provide for my family, so while my communism-system is trying to input a low reference level for my greed-thermostat, my family-values-system is trying to input a high one. Powers gets into some really interesting examples of what happens in real industrial cybernetic systems when two opposing high-level control systems get in a fight, and thinks this is the source of all human neurosis and akrasia. I think he later wrote a self-help book based around this (hence the nominative determinism). I am not very convinced.

Am I strawmanning this picture? I’m not sure. I think one testable consequence of it is supposed to be that if we’re really controlling for communism, in the cybernetic control system sense, then we should be able to test for that. For example, hide Lenin’s pen and paper so that he can’t write communist pamphlets, and he should start doing some other communist thing more in order to make up for it and keep his level of communism constant. I think some perceptual control theory people believe this is literally true, and propose experimental tests (or at least thought experiment tests) of perceptual control theory along these lines. This seems sketchy to me, on the grounds that if Lenin didn’t start doing other stuff, we could just say that communism wasn’t truly what he was controlling.

That is, suppose I notice Lenin eating lots of chocolate every day. I theorize that he’s controlling for chocolate, and so if I disturb the control system by eg shutting down his local chocolate store, he’ll find a way to restore equilibrium, eg by walking further to a different store. But actually, when I shut down his local chocolate store, he just eats less chocolate. In reality, he was controlling his food intake (as we all do; that’s what an obesity set point is) and when he lost access to chocolate, maybe he ate cupcakes instead and did fine.

In the same way, maybe we only think Lenin is controlling for communism, but he’s actually controlling for social status, and being a communist revolutionary is a good way to gain social status. So if we make it too hard for him to be a communist revolutionary, eg by taking away his pen and paper, maybe he’ll become a rock star instead and end up with the same level of social status.

This sort of thing seems so universal that as far as I can tell it makes these ideas of higher-tier control systems unproveable and unfalsifiable.

If there’s any point to them at all, I think it’s the way they express the same interesting phenomenological truth as the muscle movement tiers: we switch effortlessly between concentrating on low-level concepts and high-level concepts that make the low-level ones automatic. For example, I think “driving” is a good example of Powers’ seventh tier, “programs” – it involves a predictable flowchart-like set of actions to achieve a simple goal. “The distance between me and the car in front of me” is a sixth-tier system, a “relationship”. When I’m driving (focusing on my seventh-tier system), I don’t consciously think at all about maintaining the right distance with the car in front of me. It just happens. This is really interesting in a philosophy of consciousness sense, and Powers actually gets into qualia a bit and says some things that seem a lot wiser and more moving-part-ful than most people on the subject.

It does seem like there’s something going on where my decision to drive activates a lot of carefully-trained subsystems that handle the rest of it automatically, and that there’s probably some neural correlate to it. But I don’t know whether control systems are the right way to think about this, and I definitely don’t know whether there’s a sense in which “communism” is a control system.

IV.

There are also some sections about things like learning and memory, which look suspiciously like flowcharts of control systems with boxes marked “LEARNING” and “MEMORY” in them.

But I realized halfway through that I was being too harsh. Perceptual control theory wasn’t quite a proposal for a new paradigm out of nowhere. It was a reaction to Behaviorism, which was still the dominant paradigm when Powers was writing. His “everything is a control system” is an attempt to improve on “everything is stimulus-response”, and it really does.

For example, his theory of learning involves reward and punishment, where reward is reducing the error in a control system and punishment is increasing it. That is, suppose that you’re controlling temperature, and it’s too hot out. A refreshing cool glass of water would be an effective reward (since it brings you closer to your temperature reference level), and setting your hand on fire would be an effective punishment (since it brings you further from your temperature reference level). Powers notes that this explains many things Behaviorism can’t. For example, they like to talk about how sugar water is a reward. But eventually rats get tired of sugar water and stop drinking it. So it seems that sugar water isn’t a reward per se; it’s more like reducing error in your how-much-sugar-water-should-I-have-and-did-I-already-have-the-right-amount system is the reward. If your optimal level of sugar water per day is 10 ml, then anything up to 10 ml will be a reward, and after that it will stop being attractive / start being a punishment.
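Here’s what that looks like as arithmetic – a sketch with made-up numbers, where “reward” is just whatever reduces the error relative to a reference level, so the same sugar water flips from rewarding to aversive once the set point is met:

```python
def reward_value(current, reference, delta):
    """Powers-style reward: the reduction in error produced by a change.
    Positive = reward, negative = punishment."""
    error_before = abs(reference - current)
    error_after = abs(reference - (current + delta))
    return error_before - error_after

# A rat whose reference level is 10 ml of sugar water per day:
print(reward_value(current=0, reference=10, delta=5))    #  5: strongly rewarding
print(reward_value(current=9, reference=10, delta=5))    # -3: near the set point, now aversive
print(reward_value(current=15, reference=10, delta=5))   # -5: outright punishment
```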

As a “theory of learning”, this is sort of crappy, in that I was expecting stuff about Hebb and connectionism and how memories are stored in the brain. But if you’re living in an era where everybody thinks “The response to a stimulus is predictable through patterns of reward and punishment” is an A+++ Nobel-Prize-worthy learning theory, then perceptual control-based theories of learning start sounding pretty good.

So I guess it’s important to see this as a product of its times. And I don’t understand those times – why Behaviorism ever seemed attractive is a mystery to me, maybe requiring more backwards-reading than I can manage right now.

How useful is this book? I guess that depends on how metaphorical you want to be. Is the brain a control system? I don’t know. Are police a control system trying to control crime? Are police a “response” to the “stimulus” of crime? Is a stimulus-response pairing a control system controlling for the quantity of always making sure the stimulus has the response? I think it’s interesting and helpful to think of some psychological functions with these metaphors. But I’m not sure where to go from there. I think maybe there are some obvious parallels, maybe even parallels that bear fruit in empirical results, in lower level systems like motor control. Once you get to high-level systems like communism or social desirability, I’m not sure we’re doing much better than the police-as-control-system metaphor. Still, I think that it’s potentially a useful concept to have.


164 Responses to Book Review: Behavior – The Control Of Perception

  1. srconstantin says:

    The obvious relevance of PCT is that anything that a “control system” can do, a reinforcement learner can do. If we think motor control runs on PCT, then most human- or animal-solvable motor control problems should be solvable by a reinforcement learner (like DeepMind’s Atari-game-playing algorithm.)

    • jimrandomh says:

      Not quite! PCT can do one thing that reinforcement learners can’t: avoid unnecessarily risky exploration. If you have a gauge and it’s *unlabeled*, a reinforcement learner won’t start keeping it in the middle until it’s gotten at least one bad result from letting it stray out of bounds. But PCT can have a prior that avoids doing that without having gotten any feedback, or that ensures that the first time an unlabeled gauge goes out it’s in an otherwise-safe circumstance with few confounders to make it hard to interpret.

  2. apm says:

    That is a very nice review. For me, ‘where to go from here’ after reading B:CP was reading more about the theory. I loved the concept of the Method of Levels psychotherapy (book), though I can’t say I have much experience with it. I was really hooked after the arm control and motor learning simulations from LCSIII (book with simulations).

    Here is an old tutorial by Bill Powers on applying control theory to experimental psychology (it can take a while to get a feel).

    Bill wasn’t the first to connect the concept of servo-systems with living organisms; that would probably be Wiener, Ashby and the cybernetics crowd. In a way, as much as PCT was a reaction against behaviorism, it was a reaction against old cybernetics, or rather an attempt to fix cybernetics. I tend to think he does provide a good foundation for applying control theory to psychology or biology, a good set of methods and hypotheses to check. We’ll see how they hold up to empirical verification.

  3. Shkaal says:

    I’m not sure if that’s what you were alluding to, but the Lenin experiment has been done, and it seems that he was really controlling for communism:
    https://returnstosender.wordpress.com/2012/04/22/lenins-birthday-part-one-on-writing-in-invisible-ink/

  4. jamesbarney says:

    I haven’t read the book but I think this makes a lot of sense. Especially for things like cleanliness, loneliness, closeness.

    It seems like this theory explains how each person has a set level of cleanliness: they just generate dirt and trash until they go beyond their “time to clean” barrier. My wife experiences this feeling when there is a dirty dish in the sink that she knows about. I experience it when gnats are buzzing around me like Pig-Pen from Charlie Brown.

    I think some predictions of this theory would be that there would be weird overlaps. Like washing your hands might make you feel less guilty. Or that when people say something that comes across as too aggressive in a conversation, they will try to be extra passive for a little while until their aggression thermometer comes back in line.

  5. Null Hypothesis says:

    Control Engineer here (among other things).

    Any EE will tell you this is exactly what’s going on at a fundamental level, because it’s plainly obvious and the only way to describe it. Functionally it is what’s happening on the neuron level, and conceptually it’s what’s happening on the systems-level.

    This is the key part some skeptical people may not get. The world is not perfect. Your body is not perfect. Everything is noisy and approximate. Everything vibrates. Everything has surface flaws. Yet somehow we are still able to do very precise things.

    Your HDD right now, the one providing large archive storage to supplement your SSD, is a flat disk spinning at 7200 RPM with internal wind shear of over 70 mph. And the lever holding the magnetic head that reads and changes bits is literally being held at a precise height of a few nanometers above this spinning flat disc inside the vortex-from-hell.

    How the hell are we supposed to keep that head at 7 nanometers above the surface at all times? The answer is control systems: systems that feed their error back into their inputs. In the case of your hard drive, the head is built in such a way that a physical control system is acting – the forces from the wind push down and hold the head down to the surface of the disks, but also push up if it gets too close. There’s a restorative force that pushes it towards a goal, and holds it there within just a few nanometers of variance.

    Open-loop control is closing your eyes, going through an exact procedure, and hoping your outcome will be what you wanted. And it will be, under the assumption of perfect models and perfect environments. Closed-loop control permits you to very robustly control a very noisy system around an equilibrium, by constantly adjusting back and forth in proportion to your error (as well as integrals and derivatives of that error).
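    For the curious, “in proportion to your error (as well as integrals and derivatives of that error)” is describing a textbook PID controller. A minimal sketch, not production code, with invented gains:

```python
class PID:
    """Textbook PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. holding a drive head (or a following distance) near a setpoint:
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=7.0)
correction = pid.update(measurement=7.3, dt=0.01)   # small error -> small corrective output
```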

    Control system analysis is about looking at systems and calculating their instability: the conditions under which they will deviate from a desired norm. A control system is placed on top of that, and calculations are made to determine the parameters of the control system necessary to make it stable. These often encompass ranges of possible values.

    The icy-roads example Scott gave is perfect. Another one would be how old people walk slowly, with smaller steps. (Or how a regular person walks while carrying something fragile or valuable or unwieldy.) In those instances, you're determining the stability bounds of the system. When you take longer steps, if your center of mass gets off-center it's more difficult to re-position your legs underneath it. If your reaction time or muscle control precision is lower, you cannot guarantee 99% that you can suppress a deviation when a problem arises. In general you determine that you cannot operate at certain speeds because making them stable would require stronger muscles, faster reflexes, or better traction than you currently have. This is the same logic that governs power plants as well – they have a 'closing time' – the amount of time during which they have to respond to 'an event' before they can no longer reduce the deviance, and now a generator is getting out of sync and it has to be shut down and restarted, sometimes taking other generators with it. That's when you have a power outage for 10 minutes.

    As a different example to show that humans aren't special, consider how birds fly through forests. There are lots of tree branches in the way. How do they fly so fast and know the perfect path ahead of time? They don't. They are just playing a real-life version of those path-scrolling games. They see an object and they dodge it. And what they do is look at roughly how dense the branches are. When it’s less dense, they go faster, not because they see a path, but because they have strong confidence that there will be a path and furthermore that they’ll be able to adjust their course in time to make the openings. If the branches are more dense, they lower their flight speed because they can no longer maneuver quickly enough to dodge through the gaps 99% of the time.

    Going back to the original point, there is a step beyond this. Humans don’t just have these set control systems running. We learn new systems, and adjust the ones we already have. This has to do with ‘Adaptive Control’ – a graduate-school concept. The math for this was developed to control our spacecraft in the 60’s. It’s one thing to take a known system and make a sufficient control system for it through analysis. It’s quite another to make a meta-control system that improves the control system that’s already running. You’ve got two or more control loops running together: one estimating the system based on how your current control system causes changes to the result, and another adjusting the control system itself to improve the results.

    This is what happens for anyone playing videogames. When I want a drink, I walk to the fridge and don’t even think about it. Walking involves telling my foot, leg, hip, and waist muscles to twitch at the right times. When I want to look somewhere I tell my eyes and neck to twitch. When I’m playing a videogame, after about 10 seconds at the controls I have very fine control of walking and looking. And they involve tensing my thumbs without even thinking about my thumbs. And the thumbs are just setting positions on joysticks, which control the camera’s translation and panning velocities. It’s noisy turtles all the way down.

    From a biological perspective, neurons are acting as control systems. A neural net is a bunch of additive and suppressive signals sent together, and then sent back. Anytime you have feedback, you have a feedback system. And thus a control system. Computer neural-nets work using Markov chains and statistics and backpropagation and two dozen other methods that have been developed over time. And you’d be surprised how much of an overlap there is between the mathematics used for Machine Learning and for Adaptive Control. (And be equally surprised at how little those two fields seem to talk to each other. Machine learning in the past few years is often solving problems Adaptive Control solved 30 years ago, in a much more complicated fashion. They should be using the ready-made stuff, so they can apply the fuzzier techniques to the next stage of the problem.)

    As a side-note, this is also why I roll my eyes at the ‘dangerous AI’ stuff Scott and a bunch of other people push out. AI won’t ever ‘go rogue’ the way we’re doing it now, or are ever likely to do in the near future, because those systems aren’t doing the meta-level adaptive controls on themselves. They’re deterministic, statistical systems being managed by a control system.

    A control system can keep the head of a needle in a hell-vortex precisely 7 nanometers above a spinning silicon disk, if it so chooses. It shouldn’t be such a surprise that a control system can also manage to correctly classify an object from a thousand different viewing angles in a picture. The computation necessary is greatly increased, but the mathematics and the process are much the same.

    Control systems are great because they really make it seem like your systems are alive. It’s why I love it. It’s why when you watch those Boston Dynamics videos, those robots seem like animals and you immediately want to project a personality onto them. But you still understand that balancing on two legs is a control system, and you don’t worry about the robot’s intelligence. You don’t consider them ‘smart’ – just ‘alive’. Yet you see it correctly analyze a bunch of breast tissue biopsies and start making Terminator references. Even though fundamentally the same computers are doing the same kind of tasks of equal complexity.

    Any time you have feedback, you can talk about and analyze it in the context of a ‘control system’. That’s a broad definition, which is why it works so well. But at the same time it’s still a very specific thing with specific attributes that result in robustly controlling values in a very noisy and uncertain world. Anything you see in your life that manages to stay a certain way – like, say, your body temperature – is doing so in a very uncertain, changing, chaotic world. There’s no way it could remain where it is, without feedback. And life is the definition of maintaining a certain state – a certain pattern of information – in differing environments.

    Now, this is all corresponding to the lower-level mechanical stuff, and the lizard-brain actions our mind learns and then takes without thinking – like driving. Acrobats, or cooks that toss their dough to partners across the room, or really any other show of humans acting like perfect robots is all being done through control systems. You’ll notice sometimes how much harder it can be to do things like play the piano or ride a bike or do a flip when you’re actively thinking about it. Your brain is trying to insert its own direct thoughts into how to move your muscles, rather than letting the well-calibrated control circuits manage it for you. This is all taking place in the part of our brain that runs and does without us knowing, thinking, or caring about it. As such, it starts to lose its arguments when you get to the actively-conscious sort of things.

    The idea that our brain runs on control systems goes through an almost digital shift from ‘so painfully obvious as to be beyond mentioning’ to, as you say, ‘abstract, unprovable, and unfalsifiable’ in an instant, once you get to the high-level stuff.

    However, since most other psychological paradigms are equally worthy of the latter criticism, that’s not much of a point against him. It’s an excellent way to look at lower-level functions, and any forms of brain-damage that manifest in being unable to physically do normal-people things (like not have tremors). It’s also an excellent way to look at back-of-the-mind disorders, where people can’t do things that everybody does – but nobody knows how they do it – like recognizing a face.

    • meltedcheesefondue says:

      >AI won’t ever ‘go rogue’ the way we’re doing it now, or are ever likely to do in the near future, because those systems aren’t doing the meta-level adaptive controls on themselves.

      I think this is very insightful and partially misleading. The insight is that it is a good way of describing why current-and-near-future AIs are safe (a fact which anyone serious agrees on). I’ve been using concepts such as “power” and “task designed for” (as in, a spam filter isn’t really designed to “filter spam”, but to optimise a feedback signal in certain specific circumstances) to capture this fact, but the level of the control system is another good way of seeing this.

      But I think it’s partially misleading because it doesn’t give a full “this AI is safe” criterion. Very simple AI systems can cause large effects (see the flash crash; and if nuclear missiles were controlled by a simple AI system, then a simple failure there could have huge consequences). A simple AI trained in human propaganda and with some economic or political goal is more dangerous than a boxed theorem prover with higher-level adaptive control. There are also cases where the power and danger of the AI increases when its action space increases (or when it optimises over more complicated feedbacks), keeping everything else constant. So I think seeing it as a control system is yet another good rule of thumb for when an AI might be safe, but it’s not a clear barrier with “this is safe” on one side.

    • Steve Sailer says:

      Thanks.

      Very informative.

      Let me make a prediction, however: the part of this excellent comment that will get the most response will be:

      “As a side-note, this is also why I roll my eyes at the ‘dangerous AI’ stuff Scott and a bunch of other people push out. AI won’t ever ‘go rogue’ the way we’re doing it now, or are ever likely to do in the near future, because those systems aren’t doing the meta-level adaptive controls on themselves. They’re deterministic, statistical systems being managed by a control system.”

      Why? Because that part gives people something to argue about. In contrast, the rest of the comment is so masterful that it all comes across as: Well, of course, control systems have to work this way. I couldn’t come up with much to argue with you about over how control systems work because you obviously know vastly more than I do.

      On the other hand, I might try to get into an argument with you over the AI Menace because I wouldn’t obviously lose as badly in a speculative debate over the Skynet Threat.

      People like to argue.

      • Null Hypothesis says:

        Well, it shouldn’t come as a surprise to anybody that I’m vastly better at predicting and controlling machine behavior than human behavior.

        On the other hand, as you put it I suppose that’s really the only part to argue specifics about. And if you personally desire to, be my guest. Everything else was really meant to just be conceptual and informative rather than argumentative.

      • vaniver says:

        You know, some of us are experts in both control systems and AI Menace. 😛

      • Bugmaster says:

        Guilty as charged; but why is that a bad thing? I consider it to be bad etiquette to post stuff to the extent of “yes you are totally right” or “I agree with you 100%”, etc. It’s a bit of a waste of bandwidth, IMO.

      • ChelOfTheSea says:

        Hello again, Toxoplasma.

      • iansimon says:

        It sounds like you have independently discovered Gresham’s Law of Internet Debates: https://en.wikipedia.org/wiki/Gresham's_law

        I don’t have a good citation for the Internet Debates version, but I’m fairly confident it has been independently discovered many times.

    • Enkidum says:

      Nice response. A few questions…

      1: At the end you talk about the interference from active, deliberate, conscious thought. I’m not sure if I missed this, but are you suggesting that these active, deliberate, conscious thoughts are not operating in the control-system framework? There’s certainly a fair bit of evidence that such thought operates quite differently from most of the stuff “below the hood”.

      2: You say that this is what is happening at the neuronal level. I guess you mean the neural network level, rather than the single neuron level? Because I don’t see how individual neurons act much like thermostats. They just provide (very rough) weighted sums of their inputs. (Well, it’s really more complicated than that, but computer modellers like to pretend that’s what’s going on.)

      3: Not really a question, but there are a few things that seem to be missing from the pure control system idea. Most generally, the idea of goals. I’d agree that perception and action both involve prediction/comparison/adjustment cycles, but I think in order to make the comparisons, you have to have a target to compare input to. And (I think this is kind of Scott’s criticism too) I don’t see how you get these targets by just piling control systems on top of each other.

      I think there’s a very easy trap to fall into, which is the assumption that macro and micro scales must work the same way and involve the same basic principles. So it has to be control systems (or whatever you think is the principle governing the lowest scale) all the way down. Which is just wrong, in a very basic sense. Understanding houses is not simply a matter of scaling up one’s understanding of bricks. As I noted above, single neurons don’t seem to be usefully modelled as control systems, but when you wire them up together in particular ways they’re pretty close to it. But when you wire a bunch of those control systems together… why should the large-scale result be another control system?

      • Null Hypothesis says:

        1. Well, that’s the difficult part. You can keep stacking control systems on top of control systems, to ever increasing levels of abstractness. But where is the prime driver that sets the arbitrary goals at the top? Conceptually it doesn’t make sense, because that’d have to be a really smart Gremlin. You can keep stacking Turtles all the way down, but at the bottom there still has to be an elephant. And I have no idea what that elephant might look like.

        2. Yes, at the network level. Neurons trigger actions which trigger other neurons which trigger other actions that work their way back to the original neuron with some different response. All I really meant by that is “there’s feedback”.

        3. See part 1. If you can figure out what the Elephant looks like, you’ll have discovered the mote of consciousness that fuels higher-functioning beings. You can do some useful things on lower levels. But that’s often where their psychology is being influenced in part by physiology that’s more understandable.

        • Aapje says:

          And I have no idea what that elephant might look like.

          Control systems have settings. Why do you keep X distance to the car in front of you and not X/2 or 2*X? There is a setting there. This setting is presumably influenced by evolution. Those with bad settings die more often/reproduce less and thus a certain setting becomes dominant.

          Furthermore, there is an arbitrariness about which high level control systems you have. Why do most people want to have sex? Presumably because there is strong evolutionary pressure on having a high level control system that values having sex and gives high priority to it.

          Arguably our control systems became more complex due to evolutionary pressure and at a certain moment you got emergent properties, where consciousness, communication, social norms, etc were enabled by the ground work laid for more basic evolutionary ‘goals.’ Then those greatly improved reproduction chances, so humans adapted to become better at those things.

    • Markus Karner says:

      Completely agree. And it’s not just psychology: anywhere from ecology to monetary policy, control theory already has developed pretty much all the tools needed to achieve system stability. But stability is achieved if and only if the conditions are met – for example, no phase shift >180 degrees that would turn negative feedback into positive feedback; generally speaking, control of oscillations by either slowing the system down or achieving sufficiently fast control for the application, etc. (Bode plot, anyone?).

      What still baffles me is, why is this not widely known? And I am not an EE. I was originally trained as an ecologist. And even here we have the logistic equation, where negative feedback fails and leads to chaos or oscillations under certain conditions. At the very least, anyone leaving college should know what positive feedback is, what negative feedback is, and when negative feedback breaks down.

      • Fractalotl says:

        I was also trained as an ecologist, and want to offer Donella Meadows’ excellent essay on “Leverage Points: Places to Intervene in a System” – she describes control systems and feedback loops in a way that I found very clear. Specifically related to systems on a societal level, but might also be useful here:
        http://donellameadows.org/archives/leverage-points-places-to-intervene-in-a-system/

        • Kaj Sotala says:

          Just read that essay, and it strikes me as the most important thing on the topic of politics, society, and effecting change that I’ve read in several years.

          Thank you for linking it, I recommend it to everyone else here. Serious viewquake material.

          • Swimmy says:

            I found it really confusing. Since the author keeps using NAFTA and trade, I will too.

            I can think of a way for NAFTA to fit into every one of these categories except the top few. Most of these don’t represent my actual views, just possible arguments:

            12: NAFTA was mostly a change in some tax laws. Taxes are right there on the list for 12.
            11: Tariffs between countries are a buffer to how much trade occurs. Change the tariffs, change the buffer.
            10: America’s entire infrastructure against inequality was based on manufacturing, and NAFTA destroyed those manufacturing jobs. Now we have massive inequality and no decent blue-collar jobs.
            9: Removing so many tariffs all at once like that caused a massive increase in the speed of market feedback, hence the “giant sucking sound” of jobs going to Mexico you always hear about.
            8: Obviously NAFTA reduced the negative feedback of market excess and the power of democracy to affect how our labor chain should be constructed.
            7: NAFTA increased economic growth rates, increasing the positive feedback loop.
            6: In some ways NAFTA can be seen as a reduction in public information, by dispersing it across geological grounds. Corporations know exactly what they’re doing, because they’re centralized. Americans know very little about what any individual corporation is doing in Mexico. The greater the incentive for corporations to disperse operations, the less information in the hands of the public.
            5: The author places NAFTA here.
            4: The author’s argument for NAFTA being #5 could just as easily place NAFTA here. As she says, “Its rules exclude almost any feedback from any other sector of society. Most of its meetings are closed even to the press (no information flow, no feedback).” Hence, it gives corporations all the power to self modify and citizens no power to stop it.

            3 and on: Hard to fit NAFTA here, it was a tool rather than a toolmaker.

            Even if the list is in the correct order and has the correct structure, you can fit whatever you want wherever you want. I could do the same thing with money to public education, abortion, whatever. The author says there are tons of caveats and nuances. To me it seems like there are too many for the model to be more useful than traditional wisdom like “focus on the rules of the game, not the players” or “institutions are more important than laws.”

            (BTW obviously NAFTA falls into #12. My obvious obviously isn’t her obvious.)

          • whateverthisistupd says:

            I would agree with that, Swimmy.

            There’s also some pretty inconsistent suggestions. (I doubt that a global government with global regulations would tend towards maintaining the diversity of human cultures.) The essay as a whole makes important and interesting points, but it loses force as it gets towards specifics and starts sounding very typically left-partisan.

            The practical examples of unintuitive leverage points are good, but then as the list goes on, the author seems to get away from how one actually leverages them, ignoring the Moloch problem.

      • Peter says:

        no phase shift >180 degrees that would turn negative feedback into positive feedback

        A fun little system that has interesting negative-into-positive-feedbackness: a single-sensor line following robot. Two independently controlled wheels (steers like a tank), a caster for balance, a reflectance sensor pointing at the ground (basically an LED and a light detector), and a microcontroller. On the floor there’s some white tape in a circuit against a dark background. The “aim” is to get the robot to follow the circuit around.

        If the circuit is convex, this is easy. Have the robot turn left or right to control the amount of light the sensor is getting, otherwise go forwards. If the circuit is concave, then that strategy will fail on either a left turn or on a right turn.

        There are of course ways to stop and shimmy left and right to find out which sort of turn you are on, there are even some you can do if your sensor is digital – it’s not hard to apply the methodology of Unprincipled Hacking to get this behaviour. Would there be a good control theory way of doing this? Is this like control theory’s version of the XOR problem which caused neural networks to be abandoned for a generation?

        Anyway, is this the sort of thing you were talking about? Or have I misunderstood completely?

        • The Nybbler says:

          Maybe you could abuse a PI controller to do it. Suppose your sensor has a maximum value of 1.0 (at dead center) and a minimum value of 0.0 (off the tape). You give it a set point of say, 0.9. Then you make the proportional term turn it one way and the integral term turn it the other; the proportional term saturates as the thing gets off-track when it turns the wrong way, then the integral term turns it back.

          I think you might be able to tune this to work in a toy environment but it sure isn’t something I’d like to rely on.

        • Richard Kennaway says:

          The basic problem with the proposed system is that the sensor is not providing the information that the controller needs. The controller needs to know the deviation of the robot from the line, but the sensor is only providing the absolute magnitude of that deviation. The method of shimmying back and forth across the line can be understood (making it “Principled Hacking”) as an attempt to synthesize the required perception from the deficient one. It is probably simpler and more efficient to give the robot a pair of sensors, or even a whole camera, that more directly provides the perception that is wanted.

          • Machina ex Deus says:

            @Richard Kennaway:

            The basic problem with the proposed system is that the sensor is not providing the information that the controller needs. The controller needs to know the deviation of the robot from the line, but the sensor is only providing the absolute magnitude of that deviation.

            In other words, the basic problem is that the sensor accurately models most of real life.

          • Richard Kennaway says:

            @Machina ex Deus, “In other words, the basic problem is that the sensor accurately models most of real life.”

            Er, huh? A self-driving car can’t work with that sensor. Can’t get more real life than betting your life on it working.

          • Machina ex Deus says:

            The magnitude-only sensor is bad, but sometimes it’s all we have to work with, especially in the parts of real life that aren’t as simple as driving:

            “Doctor, I’m unhappy!” doesn’t give much guidance in the way of what to prescribe: anti-depressants? anti-bipolar meds?

            “My kid’s not doing well in school,” “I’m not progressing at work,” a general sense of anxiety or malaise: these all give us a good sense of the magnitude of the problem, but not much hint as to the direction of the solution (or even the direction of the error).

            Even in simpler situations: “Our software is late and buggy” used to be taken as a signal to add more process steps. This was… an unreliable solution. Sometimes you need to simplify the process instead, so you can get running software sooner, and so get faster feedback.

          • drethelin says:

            Unhappiness is NOT a magnitude only sensor, what you’re suggesting is more like if we didn’t know if we were very happy or very sad, but that we were very something.

          • Peter says:

            The “proposed system” is something I actually built once for a bit of fun during the holidays – oddly enough, it would not have been trivial to add another sensor. Partly because I only had one (I’d bought it to try sensing obstacles, and only later realised it could work for lines), and partly because the microcontroller board (a micro:bit) I was using has an idiosyncratic connector. It exposes a ground and a 3V pin and three GPIOs, two of which were needed to control the motors (continuous rotation servos, BTW), leaving only one for sensing. Yes, there are more pins on the actual microcontroller chip, and there’s an add-on that exposes those pins in a useful way, but it adds extra bulk and I didn’t have one handy.

            Anyway, it works – as for whether the sensor provides the information that the controller needs, it depends what you mean by “needs”. “Needs” as in “needs in order to do well”, yes, it’s inadequate. “Needs” as in “needs in order to do the task at all”, no, it provides enough information to get by, admittedly in a slow and ungainly manner. Also the back-and-forth scanning turns were not good for the robot – the construction was shoddy and the back-and-forth shaking sometimes caused it to fall apart.

            My Dad said it was “like a blind man with a stick” and I think the comparison is apt. It wasn’t a well-engineered system; it was more as if someone had taken a well-engineered system (i.e. two sensors), damaged it (destroyed one), and expected it to cope in software. The coping method was something I’d programmed in, rather than it figuring it out on its own.

    • moridinamael says:

      One of the more mind-blowing parts of control theory is the level of complexity you can handle using relatively simple math. Particularly when you move into Laplace space, where you’re just adding together simple terms to represent complex physical systems, you start to be a lot less impressed with what your brain is doing.

      Also, the fact that chemical plants used to be controlled by “computers” built out of pressure valves before we had digital computers, because apparently a clever person can implement a PID controller using whatever is lying around.

      I don’t think there’s anything in-principle keeping deep artificial neural network from exhibiting internal meta-level adaptive controls on themselves. When these systems have any kind of internal “loops” in their dynamics rather than looking like a DAG, or if they have memory units like state-of-the-art NNs, the dynamics rapidly become a lot less predictable, even if they are still definitionally “deterministic”.

      • Null Hypothesis says:

        you start to be a lot less impressed with what your brain is doing.

        I think that’s the really important point. You should be impressed with the outcome. But the process itself isn’t really complicated at all.

        Neural networks are just doing pattern-matching, typically converging on better solutions by gradient descent. And we’ve elaborated on that simple strategy to do a lot of impressive things.

        But we still can’t do something like “make a machine set its own goals.” How do you instill motivation and goals into an AI? And how do you get it to do self-abstraction? Where it can conceptualize what it’s doing, and compartmentalize and recombine different things into a sequence to accomplish a task.

        The only way we have machines do that now is if we handle the conceptualizing and sequencing for them, translating it all into a pattern we can more easily quantify, and then handing it over to the pattern-matcher to pattern-match. That still just moves the ‘root intelligence’ problem onto humans.

        Making an AI capable of training itself isn’t just a function of making its weights the output of its own feedback system. Were it only that simple. It’s a completely different question than the one we’ve been solving with them currently, and we’ll have to do something fundamentally different to get a ‘Generalized AI’ like the kinds people fantasize about.

        Consider this: we’ve gone from making computers win at chess, to winning at Go. But how well do you think you could train a neural network to play a Pokemon game?

        • Machina ex Deus says:

          @Null Hypothesis:

          But we still can’t do something like “make a machine set its own goals.” How do you instill motivation and goals into an AI?

          But this is the simplest part! You just tell the AI “these are your goals, spend all your time and energy pursuing them.”

          Just like my goals are set by me saying, “I want to lose weight” and “I want to be successful in my field” and so I spend all my time and energy…. on the SSC comments section. Hrm….

          Maybe a good exercise for each of us would be trying to figure out what our actual goals are, based on revealed preference or externally-observed behavior or something. Otherwise, we can set goals for ourselves all day and then not meet them when they conflict with existing, unacknowledged goals.

          This sounds like the kind of thing that would already be covered on LessWrong. Anyone have any pointers?

    • Garrett says:

      As someone who has a degree in engineering and spent way too much time doing Laplace transforms in school, I wholeheartedly agree. As someone who dabbles in medicine in EMS, I’m disappointed that the doctors and the engineers don’t talk to each other more. There’s a lot that both fields could learn from each other.

      I’d note that the “open-loop” model given above (perform a task with your eyes closed) is important. One of the tricks we would like to be able to do with cell phones is to map our position with only a known starting position, plus movement as detected via the accelerometer. This would avoid the need for GPS usage in most cases. The problem is that the small errors detected get propagated forward through double-integration to the point that you’re lucky if an unmoved device doesn’t think it’s on the moon by the end of the day.
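
      As a toy illustration of how quickly that goes wrong, here is a sketch (every number invented) of double-integrating a stationary accelerometer that has a tiny constant bias:

      ```python
      # Toy dead-reckoning sketch: the phone never moves, but its accelerometer
      # reads a tiny constant bias. Double integration turns that bias into a
      # position error that grows with time squared.
      bias = 0.001            # m/s^2 of sensor bias, invented for illustration
      dt = 0.01               # 100 Hz samples
      velocity = position = 0.0
      for _ in range(100 * 3600):         # one hour of samples
          accel = 0.0 + bias              # true acceleration is zero; we only see the bias
          velocity += accel * dt
          position += velocity * dt
      print(position)         # roughly 6.5 km of drift after an hour, from a 1 mm/s^2 bias
      ```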

      It wouldn’t surprise me if the multiple-integration problem is something that starts to become a bigger and bigger problem when moving up the chain into the realm of consciousness. I don’t think we have enough information to know what’s going on. It’s one thing to say that the set point for eg. epinephrine release is too high in people with anxiety. But I don’t think we have enough information overall to know even what all of the receptors are to be able to figure out the system we are controlling.

      It gets even worse when we start talking about abstract concepts like “communism”.

      • Scott Alexander says:

        “It’s one thing to say that the set point for eg. epinephrine release is too high in people with anxiety.”

        Frick, this perspective explains drug tolerance and a bunch of psych problems, doesn’t it?

        • wintermute92 says:

          It really, really does. Neurotransmitter systems are precisely process control systems, far more obviously than neurons are. Coming from an engineering background, I’ve always been shocked that this metaphor isn’t common in psychopharmacology.

          Adenosine is released by staying awake (output), and creates a rising urge to sleep (feedback) as levels increase. Eventually you sleep (input change) and your adenosine levels drop (feedback change). Caffeine blocks adenosine binding, essentially inhibiting feedback to let your system run outside of standard parameters.

          But humans have meta-control systems that notice you’re outside of sleep parameters (high adenosine levels are the feedback here?) and correct the first-order process by up-regulating adenosine receptors. So now addiction is stability in a new state. If you quit caffeine you have to wait for the meta system to notice you’re too tired and down-regulate receptors – that’s withdrawal.
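
          A deliberately crude toy version of that story, just to show the fast loop sitting under a slow meta-loop; the numbers and the linear “receptor” update are invented, not real pharmacology:

          ```python
          # Toy caffeine-tolerance sketch (numbers and dynamics invented, not real pharmacology).
          # Assume the same adenosine load builds up each day awake; perceived sleep pressure is
          # adenosine * receptor_density * (1 - blockade), and a slow meta-loop nudges receptor
          # density so that perceived pressure tracks a set point.
          set_point = 1.0
          adenosine = 1.0                  # daily adenosine load, held constant for simplicity
          receptors = 1.0
          for day in range(40):
              blockade = 0.5 if day < 20 else 0.0         # caffeine daily for 20 days, then quit
              perceived = adenosine * receptors * (1 - blockade)
              receptors += 0.1 * (set_point - perceived)  # slow up-/down-regulation
          # While on caffeine, receptors creep upward (tolerance). After quitting, perceived
          # pressure overshoots the set point until receptors drift back down (withdrawal).
          ```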

          That’s a simple example, and I’m not trying to be insulting, but it’s interesting to go through all the specifics and realize that this covers all kinds of more complex issues.

          (This is part of why it’s so damn weird that SSRIs don’t work by addressing an existing serotonin imbalance. I’d love a good theory on what the hell they’re achieving in process terms?)

          • Neutrino says:

            New reader. Interested in your SSRI observation (…SSRIs don’t work by addressing an existing serotonin imbalance…). Where may I find out more?

          • Scott Alexander says:

            Adenosine is exactly the example I would have given just because it’s the one I already knew worked that way. Are there other things like that?

          • Null Hypothesis says:

            A lot of cases of Obesity (certainly the new ones that have cropped up in the last 40 years) are a good example of interacting control systems as well. Warning: gross oversimplifications ahead.

            You have basically two control systems, one regulating the levels of blood sugar so you don’t pass out or lose a foot, and another regulating your energy availability so you don’t starve.

            In addition to this you have a third system. An open-loop control, if you will, that says: “Crave literally every bit of sugar you can get your hands on,” because sugar in nature exists together with things we really like to have, like Vitamin C.

            This third system used to be constrained by something called ‘nature’ where we couldn’t spike our blood sugar no matter how many carrots and apples we ate. If you have room for 3 apples, you shouldn’t stop at 1. So if you have room for 3 donuts… you can see where the problem occurs.

            Being obese is really just a response to starving in-between meals. Constantly spiking blood sugar leads to insulin resistance. Elevated insulin levels lead to fat cells releasing lipids more slowly, at a rate insufficient to sustain you. So your body sends a whole bunch of signals to conserve and acquire more calories (you feel hungry, tired, sleepy, lower core body temperature, etc), which not only solves the short-term problem, but also leads to your fat cells taking up more lipid and getting larger. Larger cells release lipids at a higher rate. Getting fatter is the response to elevated insulin levels, so that the rate of lipid release in-between meals is again enough to sustain your body.

            Meanwhile, metabolic syndrome aside, people have their own natural ‘weights’ they like to be at. People can be healthy and sated, eating only when hungry, and have bodyfat percentages from 5% to 25%. This ‘natural’ amount of fat they carry around is an expression of the balance point their fat cells have reached with their metabolic demands.

            You can look at calories-in/calories-out from a thermodynamic standpoint. But that’s often an unhelpful perspective because both of those are moving targets. If your body feels like it’s starving, it’s going to fight you every step of the way. It will psychologically try to increase calories-in, and it will physiologically reduce calories-out. This is why a lot of people have success with low-carb diets.

            Ignoring the ‘cult of healthiness’ that surrounds every type of diet, if someone has made themselves fat by developing insulin resistance, the best way to get thin isn’t to starve themselves, but to reduce their insulin resistance. Their body will be more willing to tolerate a calorie deficit once it isn’t literally starving.

          • reasoned argumentation says:

            Are there other things like that?

            “Bro science” is all about controlling the body’s control system response to steroid doses – in contrast to testosterone replacement therapy where the medical community seems to just prescribe testosterone and ignore the feedback loops.

          • vV_Vv says:

            (This is part of why it’s so damn weird that SSRIs don’t work by addressing an existing serotonin imbalance. I’d love a good theory on what the hell they’re achieving in process terms?)

            My (un)educated guess is that SSRIs work by inducing some complicated synaptic changes.

            Possibly in clinically depressed people there are some serotonergic pathways that are always saturated, therefore they don’t compute anything useful and/or the brain areas that compute intrinsic rewards have learned to disregard them.

            SSRIs might make serotonin levels more uniform across the brain: they don’t cause a large increase of the total amount of serotonin because of some fast feedback mechanism (unlike SRAs like amphetamines, which override the feedback mechanism and cause an immediate serotonin high), but possibly they desaturate these dysfunctional pathways, which over time learn again to compute useful information about your mood, and the intrinsic reward modules learn again to take this information into account.

            This might explain the delay in the onset of SSRIs therapeutic effects, and why SSRIs work at all despite the inconsistent link between total serotonin level and depression.

            Anyway, this is a theory that I just made up as an armchair Wikipedia neuroscientist, therefore it is probably wrong. 🙂

        • aakumar says:

          I studied biomedical engineering, and control loops were one of the most common metaphors used for describing biological phenomena; I’m a little surprised that this isn’t common everywhere, but it might be a function of half my professors having studied other engineering disciplines prior to biomed.

          Basically, anywhere you read homeostasis in a medical textbook, you can substitute control theory, though frequently biological systems are complex enough to make analysis difficult. Examples include all sorts of hormonal phenomena, such as menstruation, which is essentially a loop that keeps repeating until a signal for pregnancy is detected. I also remember it being an analogy used for autonomic regulation of breathing and heart rate, where the signal of blood oxygen content (and CO2 content) was used to speed up or slow down breathing.

          One super interesting graduate-level class that I audited was entirely focused on genetic applications of control theory and was taught by a professor from the computer science department. If you google “synthetic biology” or “genetic circuits”, a lot of the actual research (aside from media hype) is designing a genetic circuit/control loop on the computer and then creating an in vivo model through bacterial plasmids. One memorable example from the class was a “clock”: three interacting genes produce a system with stable oscillations that can make bacteria containing the plasmid glow fluorescent once per cycle.
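
          That three-gene clock sounds like the classic repressilator-style ring (each gene represses the next one around the cycle). A minimal simulation sketch, with parameters invented for illustration, looks something like this:

          ```python
          # Sketch of a three-gene repressor ring ("repressilator"-style clock).
          # Each protein represses production of the next gene in the cycle; with a steep
          # enough Hill coefficient the fixed point is unstable and the levels oscillate.
          alpha, n, dt = 10.0, 4.0, 0.01   # invented production rate, Hill coefficient, time step
          p = [1.0, 1.5, 2.0]              # protein levels for genes A, B, C
          trace = []
          for _ in range(20000):
              new = []
              for i in range(3):
                  repressor = p[(i - 1) % 3]                    # repressed by the previous protein
                  production = alpha / (1.0 + repressor ** n)   # Hill-type repression
                  new.append(p[i] + dt * (production - p[i]))   # production minus first-order decay
              p = new
              trace.append(p[0])           # p[0] oscillates; fuse GFP to this gene and it blinks
          ```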

          • Richard Kennaway says:

            Why do you call these metaphors and analogies? Surely control systems are exactly what these biological systems are, not what they are like.

    • Ghatanathoah says:

      AI won’t ever ‘go rogue’ the way we’re doing it now, or are ever likely to do in the near future, because those systems aren’t doing the meta-level adaptive controls on themselves. They’re deterministic, statistical systems being managed by a control system.

      I recently read about a newly developed AI called a “Differentiable Neural Computer” that seems to be a step in the direction of “doing meta-level adaptive controls on themselves.” Relevant text from the article:

      A computer that could generalize between learned activities would fundamentally alter the intelligence landscape, conceivably igniting the kind of “hard takeoff” scenario espoused by Nick Bostrom in his seminal book Superintelligence: Paths, Dangers, Strategies. In a hard takeoff scenario, a self-improving AI recursively augments its learning ability to the point where humans no longer really pose any competition. An AI that can generalize between learned activities could use its vast storehouse of learned models to attack any new activity with a level of sophistication only dreamed of by humans.

      We shouldn’t be surprised that the source of this breakthrough was the folks at DeepMind, a London-based AI firm responsible for AlphaGo, the Go-playing supercomputer. Writing for the journal Nature last week, the team described the underlying theory behind the new AI, which they have dubbed a Differentiable Neural Computer (DNC). It relies upon a high-throughput external memory device to store previously learned models, combined with a system for generating new neural networks based upon the archived models.

      And here’s the developer’s page on it.

      Maybe I’m not understanding you or them correctly, but the process the DNC uses to generalize learning sounds a lot like what you describe when you say:

      It’s one thing to take a known system and make a sufficient control system for it through analysis. It’s quite another to make a meta-control system where it improves the control system that’s already running. You’ve got two or more control loops running together, estimating the system based on how your current control system causes changes to the result, and another adjusting the control system itself to improve the results.

    • Symmetric says:

      Thanks for the insightful overview from a control theory perspective. I don’t know enough about current models of brain systems to know what the alternatives are, but coming from a software engineering background the brain-as-control-loop model seems an obvious good fit to me as well.

      The old behaviourist model cited in the OP is one alternative interpretation, but I’d be interested to know of a contemporary model of brain systems that disagrees with the control-loop model presented here. I know a few neuroscientists and I believe they would agree with the general hierarchical model presented here, though they probably wouldn’t use the specific cybernetic/control theory terminology.

      A side-note on your side note “…this is also why I roll my eyes at the ‘dangerous AI’ stuff”, I think that most AI-risk folks would agree that as long as AIs “aren’t doing the meta-level adaptive controls on themselves”, there is little risk. As I understand the argument, the risk comes when the AI is given the ability to adapt at the meta-level; that would be required to truly call it a “General Intelligence”. (Perhaps you’re referring to an AI risk argument that I’m not familiar with though).

    • Michael Arc says:

      I’m looking to hire a control systems engineer in the next month for a manufacturing job in Brooklyn Naval Yards. Competitive salary, equity, exciting projects.

      Michael.vassar@gmail.com

    • Scott Alexander says:

      Thanks, this is helpful.

      Do you get the impression that neuroscientists already know this sort of thing but don’t think it’s a fruitful avenue for further investigation, or do they not know anything about it at all?

      Related question: do AI researchers specifically believe what they’re doing involves control systems and use the science of control systems in their work? I’m not talking about robot locomotion stuff, but stuff like AlphaGo or Watson?

      • Null Hypothesis says:

        My guess would be that the fruitfulness (fruitility?) is limited.

        To the first question: that lets me expound a bit on what I didn’t say well the first time. You can put a blanket title of ‘Control System’ on anything and everything with a feedback loop as a conceptual point. But when you narrow that definition down to where engineers actually have a field called ‘control theory’, then you’re talking about a set number of tools and concepts for analyzing the dynamics of system responses. Which is very easy to do for a single variable within an approximately linear system.

        But when you start trying to control multiple states simultaneously, in a non-linear system, you can quickly see how the complexity grows exponentially, and the very concept of ‘optimal’ becomes fuzzy. Engineers addressing these systems often approximate and compartmentalize different parameters and goals to try and make it more manageable.

        When you get to trying to manage biology, sometimes we have very straightforward control mechanisms, where certain things mostly only affect certain other things, in proportional amounts, and thus models can be simplified and tracked.

        When you get to the point of neurotransmitters, chemical imbalances can be analyzed like this. How are the chemical levels controlled? Where are they balancing around? Is that too low or high? Is that causing problems in the brain? etc. And you can perturb different transmitters and study how chemical levels change (preferably in something like mice rather than humans) and see if you get any consistent responses out. Though the interconnected nature of chemistry can make this difficult.

        As far as using it to address any ‘psychological’ problems, conceptually it’s a great way to frame things. But at the same time it’s often just going to be ad-hoc or post-hoc rationalizing or explaining of what’s going on. For describing a bad driver, I could say that they have too little derivative control, and tend to oscillate around their goal. If they have very jerky movements I could say they’re using too much gain. I can describe the problems the control system has, were it a control system driving and exhibiting these same faults. But how exactly does that translate to making them a better driver? I can tell them to be less forceful with their actions, perhaps. (“Ease up there, lead-foot.”) But that’s a no-duh. The metaphorical description doesn’t lead to new insights or solutions.

        On the AI question, for those kinds of examples they’re definitely not. But largely AlphaGo and Watson aren’t the kinds of AI research I’m talking about. On one level you have people using tools like neural networks and backpropagation to try and accomplish things. That’s what AlphaGo did. It took a lot of careful structuring of the system, which was then left to run Monte Carlo simulations of Go games against itself until it Got Good.

        Here is a great article on how AlphaGo did what it did, written by a PhD in machine learning. You’ll find the discussion is entirely related to the structuring and training organization of the learning agents. Not how the agents themselves modify their values (back-propagation). The Architect is concerned with how to arrange all the building materials. Not how to cut the wood and smelt the steel.

        They’re certainly not doing anything with control systems. They are just tuning the parameters of the tools they’re using, tailoring the structure and the inputs to fit their specific problem. (And by ‘just’ I mean just doing something so complex and open-ended there’s an entire field of study devoted to it)

        That’s an application of learning agents. When I refer to AI and Control systems having large overlap that’s probably unrealized, I’m talking about designing learning agents. How does a neural net learn? Give me 30 neurons and I can make an agent that evaluates breast biopsies for cancer, or estimates the heating and cooling load given a bunch of building parameters. I can make it identify what region of the country I’m in by feeding it power from the outlet and identifying which power grid I’m on.

        All these different tasks are possible with the same neurons. The only thing different is the strength of the weights that connect the neurons. The set of all possible weights contains within it the solutions to trillions and trillions of conceptual problems. How do we find the weights that fit our problem? The question of designing a learning agent is “what algorithm do you use to adjust the weights?”. This is where the feedback for learning is taking place. How do you translate an error in your desired output, to exact amounts you adjust your weights by to get a better result? How do you do it rapidly and efficiently? How do you avoid local optimums – local places of stability?
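
        For concreteness, the most common answer to “what algorithm do you use to adjust the weights?” is some flavor of gradient descent on an error signal. A toy single-neuron sketch (the data, learning rate, and iteration count are all invented) that learns the OR function:

        ```python
        import math, random

        # Toy weight-adjustment loop: gradient descent on squared error for one sigmoid neuron.
        random.seed(0)
        data = [((0, 0), 0.0), ((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 1.0)]  # learn OR
        w = [random.uniform(-1, 1), random.uniform(-1, 1)]
        b, lr = 0.0, 0.5
        for _ in range(5000):
            for (x1, x2), target in data:
                z = w[0] * x1 + w[1] * x2 + b
                out = 1.0 / (1.0 + math.exp(-z))           # sigmoid activation
                grad = (out - target) * out * (1.0 - out)  # error fed back through the neuron
                w[0] -= lr * grad * x1                     # each weight moves against its gradient
                w[1] -= lr * grad * x2
                b -= lr * grad
        ```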

        I don’t see any deep Controls Analysis of those systems. I see AI researchers designing distributed agents that are simple, but guaranteed to converge. The mathematics are very similar. I just feel there’s a lot of Fruit that’s been ripening on the tree of Modern Control Theory for 40 years, that AI researchers might want to go and pick.

      • Kaj Sotala says:

        On neuroscience applying control theory ideas, see here for a bit of discussion about an excellent paper that does it, or here for the paper itself.

      • Enkidum says:

        Neuroscientists know about “this sort of thing”, but I don’t know that most of us call it “control systems”. But the idea of adjusting some values on the basis of the difference between input and desired state in a continuous loop is very common in areas as diverse as, say, visual processing and movement.

      • vV_Vv says:

        I’m not talking about robot locomotion stuff, but stuff like AlphaGo or Watson?

        I don’t think that AlphaGo or Watson really fit this control system framework in any non-trivial way.

        However, various ML labs, including DeepMind, are working on types of hierarchical reinforcement learning systems where a high-level RL agent sets goals and intrinsic reward signals for a low-level RL agent that in turn generates low-level actions (see this recent paper for instance).
        I’d say that this fits well with PCT, or at least with the lowest levels of its hierarchy. But as far as I know, nobody seems interested in using eight or nine levels of control hierarchy as a realistic AI design (at least not yet; then again, neural networks had only two or three layers up until ~5 years ago, and now they can easily have tens to hundreds).

        Anyway, it seems to me that the higher you go in the layers of the hierarchical control abstraction, the less precise and fruitful the framework becomes.

        You could argue that a corporation is a hierarchical control system: the shareholders control the board, which controls the CEO, who controls the higher management, and so on down to the janitor who mops the floor. But you can’t fruitfully apply the standard theory and practice of control engineering to business organization, or at least I don’t think anybody has successfully done it. If you squint hard enough you can recognize some familiar patterns: over-corrections, oscillations, etc. Maybe the current theories aren’t general enough to capture the relevant aspects of these disparate systems, but so far it has been difficult to formalize these intuitions and it’s not obvious to me that it could be done eventually.

      • whateverthisistupd says:

        It’s a case of oversimplifying. There isn’t a “control system” for communism. You have a system that sort of gauges what is considered fair behavior within your group (a lot of which is still running on hunter-gatherer assumptions and doesn’t scale terribly well), you have higher-level cognitive functions that allow for analyzing information gathered by these lower-level “control systems”, you have stuff like your personal ego, and it all interacts to form complex behavior and ideas.

        Also worth noting: in the brain, behavior-directed “control systems” aren’t just trying to maintain a homeostasis exactly, they’re trying to get you to “do things”. So yes, things like hunger are your system saying “I need nourishment, here is motivation to find food,” but in the case of being overweight, other systems can trigger your higher functions to will you to ignore those signals to a degree and make changes.

        In other words, it’s not that it’s not true, it’s that the simple model originally proposed isn’t true.

        Seeing the link to the LessWrong post criticizing the idea, it seemed a little confused. While THAT specific model was demonstrably false, it doesn’t mean the basic idea is wrong. But I suspect creating a model would be much more difficult than those earlier attempts. Nothing in the brain is that neat.

    • wintermute92 says:

      This is a really great summary, thank you. I’m sad I never took process controls in school, and I probably ought to find myself a good textbook or online course. (Especially since I’m one of those machine learning types who’s aware of the similarity, but not enough to act on it.)

      Two interesting points come to mind.

      First, your note about overthinking mechanical tasks (and really the whole piece) lines up very nicely with the “stages of competence” model for learning, where “conscious competence” is a step worse than “unconscious competence”. Lots of people get very excited about this as a theory of ‘choking’, where experts under pressure let their conscious thoughts get too involved in a task and screw it up. But the process control viewpoint makes this both blindingly obvious and more accurate! It’s not just some handwaved “what if we revert to thinking too hard”, it’s a clear and predictable result of bypassing a control loop to directly manage some low-level operation like throwing.

      Second: I agree with your AI point for basic optimizers. First-order control AI can be dangerous in the “bad parameter choices got you killed” sense, but not so much in the “unexpected behavior from the control system” sense. Crashing cars and markets, not Terminator.

      But I think we might be hitting that AI/controls divide: I know people who are already working on (basically) meta-control systems that can adjust their own targets/reward functions. They’re behind first-order systems, but not hopeless. Do you have any thoughts on how much the ‘go rogue’ scenario worries you for projects where AI is doing meta-control work on itself?

    • Controls Freak says:

      Another Control Engineer here (also worked a fair amount on neuromorphic computing/control). My degrees are in aerospace, though, not EE.

      I almost can’t disagree more. It’s insanely easy to look at things, reason backwards, and impose our tools upon them. For the brain, we’ve done it with hydraulics, mechanical devices, and the telegraph. I recently was part of a large research program formulation meeting involving academic and government researchers in neuroscience, robotics, control, and learning. One of the learning guys presented his “everything is cost functions, guize” spiel, and the neuroscientists roasted him. (He may still get funded, because a fair amount of what our org cares about is doing useful things rather than actually doing the fundamental science of biology.) I can just as easily imagine a bunch of optimized cost functions (could even invoke the discipline “optimal control”), but there’s very little reason to think that it’s any better (or worse) of an extrapolation than gobs of simple little loops.

      A neural net is a bunch of additive and suppressive signals sent together, and then sent back. Anytime you have feedback, you have a feedback system. And thus a control system. Computer neural nets work using Markov chains and statistics and backpropagation and two dozen other methods.

      This is yet another great example. Most modern ANNs have almost no relationship to biological neurons, and I regularly have to reject papers or demand lots of revisions in order to get authors to walk back their claims about how they’re going to use an abstraction (that was devised before even Hodgkin-Huxley!) in order to tell us something about actual biological networks. We’re poorly imposing our own structures onto biological systems.

      One of my favorite lines from a paper of all time is the closing line of Rossignol’s review paper. They were focused on low-level feedback for locomotion. Turns out, this is not trivial. There are multiple different mechanisms that are super interdependent, highly non-linear, incredibly fast timescale, and very state-dependent. He wrote:

      The more we dig into the details of these sensorimotor interactions, the more it seems improbable that they should work so smoothly, but they do. [mic drop added]

      I take all my massive abstractions of neural nets with the same type of salt Scott was using for the pharmacogenomics post.

      Open-loop control is closing your eyes and going through an exact procedure, hoping your outcome will be what you want.

      The thing is – we do this all the time. We have pretty good direct evidence of central pattern generators for locomotion in fish and indirect evidence in mammals. Many of these are primarily open-loop, perhaps with some closed-loop subsystems.

      If your reaction time or muscle control precision is lower, you cannot guarantee 99% that you can suppress a deviation when a problem arises.

      Another example of something that looks open-loop is a baseball batter. They can’t reliably implement a closed-loop controller on the necessary timescale. If anything, we have to model it as a feedforward, open-loop controller. We’re starting to see our comprehensive theory break down around the edges. I think an important question in robotics is to construct a theory for how to determine which tasks are manageable through open-loop control (and which provide suitable performance benefits). Nevertheless, at some point, if we’re saying, “Oh, well, all of biology is either closed-loop or open-loop control,” I can probably squint real hard and stuff biology into the framework (because, uhhh, I can imagine almost anything is a signal), but it’s not very helpful for any actual research or predictions.

      Any time you have feedback, you can talk about and analyze it in the context of a ‘control system’. That’s a broad definition, which is why it works so well. But at the same time it’s still a very specific thing with specific attributes that result in robustly controlling values in a very noisy and uncertain world. Anything you see in your life that manages to stay a certain way – like, say, your body temperature – is doing so in a very uncertain, changing, chaotic world. There’s no way it could remain where it is without feedback. And life is the definition of maintaining a certain state – a certain pattern of information – in differing environments.

      Whenever I meet a particular prof at program reviews, conferences, or other meetings, we always revisit an interesting question he’s proposed. He was looking at the behavior of a particular bird while feeding from a flower. When moving the flower/feeder around, he was able to get something that kinda looked like a linear response… and the multiple modes of sensing interacted kinda like a combination of linear systems. He’s basically posed the question, “Is it weird that we have this complicated mess of highly interconnected, highly non-linear components that seem to result in a somewhat linear-looking system?” The paired question, though, is, “If it is weird… and if biology is in some fashion ‘doing’ control theory… how?!” I mean, not to make fun of Cost Function Guy again, but are we really postulating that at some point in evolutionary history, we developed a feedback loop or a cost function that says, “I want thingBelowMe to be stable, be within a linear regime, and have a gain margin of X”? That seems utterly absurd to me!

      Instead, I think that our ability to create abstract models is good… almost too good. Good enough to fool ourselves, at least. Like you said, everything stable almost certainly involves some feedback, but that doesn’t mean that something is fundamentally “doing control theory”. It just means that lots of things can be modeled as stable systems. But even that is tenuous! My favorite question on that point is whether or not Jupiter’s Red Spot is stable. On one abstraction, it’s extremely unstable! It’s a tumultuous storm! (…now I’m kind of curious whether anyone has run FTLE on it…) But on another abstraction, it’s one of the most stable features in the solar system! (Sidenote: my favorite paper on how we can equivocate on types of stability is “Asymptotic stability equals exponential stability, and ISS equals finite energy gain — if you twist your eyes“.) Sure, if we find the right pieces, we can likely come up with some feedback mechanism that is producing this… but are we really saying that Jupiter’s atmosphere is “doing control theory” in the way that even remotely resembles what we’re trying to talk about when we talk about the brain?

      If we’re just saying, “Animals seem stable in lots of ways,” then sure. Do many of those systems exploit feedback? Absolutely. On the other hand, animals are unstable in lots of other ways.

      Consider how birds fly through forests. There are lots of tree branches in the way. How do they fly so fast and know the perfect path ahead of time? They don’t. They are just playing a real-life version of those path-scrolling games. They see an object and they dodge it. And what they do is look at roughly how dense the branches are. When it’s less dense, they go faster, not because they see a path, but because they have strong confidence that there will be a path and furthermore that they’ll be able to adjust their course in time to make the openings. If the branches are more dense, they lower their flight speed because they can no longer maneuver quickly enough to dodge through the gaps 99% of the time.

      I agree that they are remarkably capable, and I’ve been part of individual research efforts and major programs to understand various things they do and ways we can gain similar capabilities. However, one thing that comes up is that they fail way more often than you give them credit for. Animals do stupid things all the time. They run into stuff all the time. (Check out the StoppedWorking and AnimalsFailing subreddits.) We do stupid things all the time. Inevitably, in one of these rooms with all those researchers, someone talks about how we should think about failure tolerance and robustness. The ability to fall, run into a door, or whatever, and not be that hurt. Sure, we can kludge on, “But but but, there’s another control system or another cost function or another X that means we reduce the number of falls!” At some point, it really starts sounding like unfalsifiable overfitting of a model to the data rather than anything really illuminating.

      I think at best, we’re saying, “Things are dynamical systems (duh). Some of them will be stable. When you see something that is stable, you can probably find something that looks like feedback.” So, to the extent that we know enough of the dynamics (to know that it’s stable, unstable, or even that it contains a bifurcation), we can certainly exploit that knowledge. I think it’s a stretch to say that essentially everything (including planetary atmospheres) is “doing control theory”.

      Finally, I would take a massive grain of salt with scaling it up to complicated processes like economics or political systems. Are they dynamical systems? Absolutely. Heck, Plato postulated that various forms of government were unstable… and then went on to postulate a “state machine” (we could squint and call it a control loop… or we could call it a limit cycle, for that matter). Is that a good model of the dynamics? I don’t really know. Is it helpful to imagine that society (or even individuals in that society) is “doing control theory”? Not really. It’s way more important to actually nail down the dynamics. Then, it’s possible we might find some feedback pathways (and if we’re lucky, they might even be meaningful!).

      TL;DR Sometimes a control theory model is useful; sometimes, we should slow down. (Edit: Also, many people confuse “is a dynamical system” for “is a control system”.)

      • Controls Freak says:

        I’d like to add a couple examples that came to mind, which should caution us against saying things like, “That just seems unstable, so there has to be some sort of active control going on.” The first is cockroach walking. At first glance, you might think, “Man, that’s a complicated problem. You have to precisely place individual feet in particular locations, respond to normal forces, ensure that your center of mass is located such that not only will you not fall down/tip over, but that your muscles/joints are such that you can continue to locomote forward, and so on and so on.” However, IIRC, one of the things that Guckenheimer and Holmes did (after becoming giants in the field already, but before turning to lamprey eels) was show that you can do a shocking amount of hexapod walking in open loop! (I’d have to search for this; it’s been a while since I remember hearing a talk on it.)

        Another example that I remember being presented more recently is gaze stabilization for birds in flight. The natural thing we would do (and what people do in practice for all the quadrotors you see) is put a big gimbal on it, do a bunch of fancy state estimation (both of the vehicle and using optical flow), and actively control the gimbal to keep it steady. From observing birds, we might say, “Wow! Their body moves around a lot, but their head remains remarkably still! Of course there’s a feedback controller in there which accomplishes all that beauty!” Unfortunately, it turns out… you can do a lot with passive stabilization, and there’s a decent chance birds do.

        Again, “dynamic systems”, not “control systems”. Sometimes, animals and human brains do close loops, but we have to be very careful about claiming it.

        • benquo says:

          This was pretty surprising until I realized that “giants” was a metaphor, so there was no intended parallelism between “becoming” and “turning to”:

          after becoming giants in the field already, but before turning to lamprey eels

        • 27chaos says:

          I’m not sure I’m interpreting the jargon correctly. In what sense, if any, is an open loop a loop? It seems like it’s not a loop, to me.

          • Controls Freak says:

            That’s… a good point. It’s definitely been in my box o’ jargon long enough that I honestly didn’t even think about how weird it is. I might spend the rest of the day hunting down the first use of “open loop” in the literature. It’s kinda work-related…

          • Richard Kennaway says:

            An open loop is not a loop, in exactly the same way that a broken bottle is not a bottle.

          • tardx says:

            Closed loop control systems can be analyzed in terms of their transfer functions by looking at an ‘open loop’ equivalent (meaning a system with no feedback), and by using a ‘Nyquist plot’ of the open loop frequency response, for example, to determine their stability. So you’re right: it’s not a loop, it’s a term of art.
            As for the origin, I don’t know if Nyquist used the term in his paper of 1932, but I have a 1970 textbook that uses it, so sometime in the preceding 38 years…
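
            For readers following along, the standard relation being used there, in the usual notation with forward path $G(s)$ and feedback path $H(s)$, is

            $$ T(s) \;=\; \frac{Y(s)}{R(s)} \;=\; \frac{G(s)}{1 + G(s)H(s)} $$

            so the Nyquist plot of the open-loop product $G(s)H(s)$ (together with a count of its unstable poles) tells you whether the closed loop is stable, without ever having to close the loop.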

      • whateverthisistupd says:

        For those of us ignoramuses, could you expound a little more on closed vs open loops? I’m getting the sense you’re not just talking about negative vs positive feedback?

        • obserience says:

          TL;DR: Closed loop control means that there is feedback. Open loop systems don’t have any idea what’s happening at the other end. They just vary the input values according to the desired output value.

          Consider an electric pump. When power is applied, it starts up and starts pumping water. The thing we want to control is flow rate.

          The simplest control system is no system at all, just an on/off switch. When turned on, the pump starts up, reaches a roughly constant speed and starts pumping. We have no control over flow rate (just an on/off switch) and if something changes (temperature of the liquid, resistance of the piping) the flow rate could change too.

          Control engineering is about building control systems that let us change the set point (flow rate) and that respond to external disturbances to maintain that set point.

          Open loop control only sets the set point approximately. For example, a variable power supply could be put in. By twisting the knob we can control the power applied to the pump and change the flow rate. We could build a table of knob positions and flow rate values and then program a control system to set the knob to the correct position for the desired flow rate. Notice that the system won’t maintain that flow rate in the presence of disturbances. If we close a valve in the downstream piping and the resistance to flow increases, the flow rate would decrease.

          Closed loop control adds feedback. A flow sensor could be put in just after the pump and a control system would use readings from that sensor to decide where to set the power supply knob so the desired flow rate is maintained. Now, even if some external factor changes, the control system can compensate to keep the flow rate constant.
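
          In code, the difference described above might look something like this sketch; the plant model, the calibration table, and the correction gain are all invented for illustration:

          ```python
          # Open-loop vs closed-loop control of the pump's flow rate (all numbers invented).
          def pump_flow(knob, pipe_resistance):
              """Toy plant: flow depends on the knob and on a disturbance we don't command."""
              return knob / pipe_resistance

          # Open loop: look up the knob position from a calibration table and hope nothing changes.
          calibration = {10.0: 10.0}   # desired flow -> knob position, measured at resistance 1.0
          knob = calibration[10.0]
          flow = pump_flow(knob, pipe_resistance=2.0)  # a valve half-closed: flow is now wrong (5.0)

          # Closed loop: a flow sensor feeds back, and the controller keeps trimming the knob.
          knob, target = calibration[10.0], 10.0
          for _ in range(50):
              measured = pump_flow(knob, pipe_resistance=2.0)
              knob += 0.5 * (target - measured)        # nudge the knob until measured == target
          ```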

    • This is a masterful comment; may I ask you a related question?

      When I worked in academic financial modeling, we used a Kalman Filter to estimate a state space that explained a series of debt instruments. Learning how to set up the filter and run it was beautiful, but it was this very challenging set of recursive dynamics that required a very smart human to understand and construct.

      My knowledge of ML is less advanced, but from what I have read Neural Net stuff can approximate Kalman Filter type dynamics, but within the black box of neurons, not revealing the Kalman Filter dynamics to the human observer.

      What do you think the next step of combining these two paradigms is? What would it even mean to combine a neural net and a Kalman Filter in a way that improves our filtering abilities?

      • Null Hypothesis says:

        It’s funny you mention the Kalman Filter.

        It was developed in the late 50’s, and was further elaborated on to get us to the Moon. Since then tons of additions or modifications have been made based on the problem it’s being applied to. When dealing with controlling a system, the Kalman Filter sits right alongside the Control system, and they feed off of each other. Especially so in Multivariate and Adaptive systems. Describing those systems is a bit more difficult, and involves those ‘complicated dynamics’ you’re talking about. But the actual functionality of it is quite simple and readily understandable. It sounds like you already understand them, but for other readers, I’ll cover the concept.

        Guiding principle of Bayesian inference: when you infer based on two noisy measurements, your estimate will be less noisy than either one. Put another way, it means that any data, no matter how noisy, is helpful. The less noisy, the more helpful.
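
        The usual formal version of that guideline, for two independent measurements $x_1$ and $x_2$ with Gaussian noise of variances $\sigma_1^2$ and $\sigma_2^2$, is the inverse-variance weighted average:

        $$ \hat{x} \;=\; \frac{\sigma_2^2\, x_1 + \sigma_1^2\, x_2}{\sigma_1^2 + \sigma_2^2}, \qquad \frac{1}{\sigma^2} \;=\; \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}, $$

        so the combined variance $\sigma^2$ is smaller than either measurement’s variance on its own.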

        So you start off with a guess of your initial state (k=1). Might as well guess 0 for everything, because you have no information to go on. Likewise you set the variance of your noise to be huge (9999…), practically infinite, so a bell curve from it looks like a uniform distribution across your domain. You have zero information, this is just a starting point.

        Then you take a measurement of your state. That measurement is now your state estimation, and the noise of your sensors is now the noise of your estimation.

        Then you take this state estimate, and guess what your state will be at the next time step (1us, 1s, 1 hour, or 1 week from now, at k=2) based on some model you have of your system. And when you’re predicting the future, there is uncertainty involved. That’s reflected in something called the ‘Process Noise’. So your estimate of your current state is only so good, and then when you forward-predict your next state, that prediction is always going to have higher variance (be worse). But the variance won’t be infinite.

        Come your next time step (k=2), you take a new measurement of your state. In addition to these measurements, you also have your prediction based on the last state (k=1), which has its own noise. You then combine these measurements, finding some compromise based on their weights, and now you have your state estimate for your second time step.

        Here’s the key part. The estimate for your second time-step (k=2) is going to be better than the estimate for your first time step (k=1). The first step was exactly as noisy as your sensors. The state at the second time-step was estimated using both your sensors and some other noisy estimate. According to Bayes, your second state estimate is now less noisy than your sensors.

        Then you estimate the state at your third time-step in the same manner, using a new measurement from the same sensors, and using a prediction of the state based on the 2nd state estimate. The 2nd state estimate was better than the first state estimate, so the prediction of the third state will be less noisy than the prediction of the 2nd state was. So that means that your overall 3rd state estimate is going to have even lower variance.

        Continue ad nauseam. As time goes on (assuming your sensor’s reliability is constant) each new state estimate is better than the last. This isn’t unbounded however – it’s asymptotic. Eventually you’ll reach a steady state, where the certainty lost by forward-predicting is exactly offset by the certainty gained from the next measurement. Where the sensor measurement cancels out the process noise.

        Incidentally, if you start off with perfect knowledge of the state, the reliability of your future state estimations will actually degrade down towards that same asymptote. Because even if you start off perfectly, you can’t predict the future perfectly unless your model is also perfect. The imperfection of your model (the uncertainty of the system or external influences) is encapsulated in the process noise. So whatever side you start on, your certainty will always tend towards a specific value based on your measurement noise and process noise. Hey look! More negative feedback!

        The basic point is that the Kalman Filter is the optimal least-squares estimation of a state given all your information (your history of sensor measurements) but it doesn’t require you to keep using all your old measurements in your calculation. You just need your current sensor readings, and your most recent state estimation. You’re encapsulating all your previous measurements into your previous estimate, and a record of the noise of that measurement.

        We refer to this kind of system as ‘Markovian’ when the next state relies only on the previous state. Or put another way, knowing the previous two states provides you no more information about your current state than just knowing the previous state.
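
        The whole walk-through above fits in a few lines of code. Here is a minimal one-dimensional sketch in Python; the noise values and sensor readings are invented, and the “model” is just “the state doesn’t change”:

        ```python
        # Minimal 1-D Kalman filter following the walk-through above (all numbers invented).
        process_var = 0.1       # uncertainty added by each forward prediction ("process noise")
        sensor_var = 4.0        # variance of each sensor reading
        estimate, variance = 0.0, 1e9      # start with no information: huge initial variance

        readings = [5.1, 4.7, 5.3, 4.9, 5.0, 5.2]   # invented measurements of a value near 5
        for z in readings:
            # Predict: push the estimate forward one step; certainty degrades by the process noise.
            variance += process_var
            # Update: blend prediction and measurement, weighted by their variances (Bayes).
            gain = variance / (variance + sensor_var)
            estimate += gain * (z - estimate)
            variance *= 1.0 - gain
        # The variance shrinks toward the steady state where the certainty lost to process noise
        # is exactly offset by the certainty gained from each new measurement.
        ```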

        ============
        Now I go through all of this for two reasons:
        1) Kalman filters are easy to understand conceptually, and I hope it’s been made a little clearer for other readers.
        2) To better answer your next two questions.

        Can neural nets approximate a full Kalman Filter’s functionality? Yes they can. All a Kalman Filter is, is a method of successive approximation. Mathematically it’s the least-squares optimal estimation, but there are other methods you can use that will give suitable results. A neural net replicating that behavior isn’t surprising, provided it has some sort of capacity for feedback from previous outputs.

        However, a neural net will never perform better than a Kalman Filter for the same computational complexity. A neural network is a bunch of matrix multiplication – and so is a Kalman Filter. But at every step, the neurons will perform some sort of nonlinear summation (threshold, sigmoid, arctangent, etc.) and then propagate those outputs to the next level. Neural nets can only do one kind of operation, and they have to adjust weights to approximate other operations using that one operation. Looking at it from a sort of conservation-of-information standpoint, the equivalent calculations that the Kalman Filter does have to be done, one way or another, by the neural net. It’s like building an entire computer chip using nothing but NAND gates. It’s possible, but you’re going to use more transistors than if you used straight OR, NOR, AND, NOT, and XOR gates. As you say, it’s likely to perform a bunch of calculations that eventually do approximate a successive-approximation system, but can’t be compartmentalized or broken down into separate conceptual stages.

        What would combining these things look like?
        To a degree, I couldn’t tell you. Combining them in any meaningful manner would probably cover dozens of Ph.D. theses. When I spent some time building the learners from scratch, or playing with the tools to get them to learn things, it became very apparent that similar kinds of calculations were being used to tune the systems. However the AI side wasn’t using a lot of the more developed direct solvers that multivariate controls people use. Control Theory, the academic discipline, involves a lot of very complicated mathematics and proofs that basically verify that you’re going to converge towards solutions, or that you’re going to get an optimal estimate for states, or that you’re going to achieve certain control bounds on so many of your goals.

        And I don’t blame them, because as interesting as all of that was to learn, it’d be hell to try and adapt to something else. Modern Control Theory has basically been iterating on the same questions for 40 years. It’d be deceptive to call it low-hanging, but there’s a lot of fruit there to be harvested and repurposed. I think there are probably some powerful weight-adjusting algorithms that could be fashioned out of the tools of Control Theory. And Hybrid systems, where learning agents are used to approximate unknown systems as part of the feedback loop, might show some promise. As I said above, a Neural Net can’t do better than a Kalman Filter, but that’s provided that the Kalman Filter has an equally good model of the system. If a neural network can learn a better model, that would basically translate to less process error, and thus a tighter bound on the estimation uncertainty.

        • Thanks for the thoughtful reply, this is a question I have wanted to read the answer to for about a year, and I’m glad I found someone with the ability and kindness to answer it 🙂

    • Bugmaster says:

      As far as I understand, Scott et al are afraid of a poorly written control system: one that would contain a positive feedback loop. If this system was controlling a Boston Dynamics robot, it would immediately accelerate to its top speed and crash into a wall. As far as I can tell, proponents of the Singularity believe that, when it comes to problem-solving techniques, there is no “top speed” and no “wall”; the system will keep accelerating indefinitely until it consumes the entire Universe.

      I personally disagree with this view, but I also do not think that your argument against it is sound.

    • grreat says:

      The idea that our brain runs on control systems goes through an almost digital shift from ‘so painfully obvious as to be beyond mentioning’ to, as you say, ‘abstract, unprovable, and unfalsifiable’ in an instant, once you get to the high-level stuff.

      But… why is there that digital, sharp shift? Because no one knows what individuals optimize for past moving from point A to point B?

      My theory is that humans behave in a kind of local optimum search on the high level. I’m not sure what these optimum parameters are, but they seem to include social status and procreation access (probably evolutionary goals). Groups get stuck in a local optimum trap and only move to a globally higher optimum if they are kept from the local optimum by the defensive behavior of those occupying it, or if some injection of energy disrupts the entire local optimum. Analogous to the computer problem of local optimum traps and optimum searches.

      Perhaps the control system at the top is the optimum search for procreative ability.

      • whateverthisistupd says:

        I think this is the wrong way to look at it. Why does there need to be a single top system?

    • sidereal says:

      Machine learning in the past few years is often solving problems Adaptive Controls did 30 years ago, in a much more complicated fashion.

      Can you recommend a textbook that would cover some of the solutions machine learning is reinventing?

    • ksvanhorn says:

    > Machine learning in the past few years is often solving problems Adaptive Controls did 30 years ago, in a much more complicated fashion.

      I would love to read an article elaborating on this sentence. Null Hypothesis, if you’re not up to writing that article, can you suggest some pointers?

  6. dazuck says:

    Thanks for the review, just ordered the book so perhaps I’ll be able to address this myself soon.

    Does applying control systems to higher tiers fail because it can’t work for more abstract concepts, or just because something like ‘communism’ is the wrong type of abstract concept to work up to? The 9th-tier systems could be fundamental human motivations that stand alone, while concepts like ‘communism’ are not actually a tier but rather ideologies that tie those systems together when they come into competition.

    For example, say 9th tier systems include our desire for liberty, for equal/fair treatment, for security of our person and property, for recognition/status among peers, etc. In your example, the man would be torn about how hard to work at his hammering job because he is conflicted not between ‘communism’ and ‘family values’ but rather between the status he gains from being more productive and equality served by doing exactly his role and not overreaching. Communism and other ideologies go across these systems, and arise precisely in order to handle the situations where systems come into conflict.

    So we may all share the same systems at the top – the same fundamental motivations we are constantly controlling for. But a communist would differ from a libertarian in how to weight the importance of each system and how to handle conflicts.

    • whateverthisistupd says:

      I think the error here is assuming a straight hierarchy of systems. Instead, further up the hierarchy, the systems become more entangled and pull us (or synergistically push us) in different directions, leading to more complex emergent behavior and to the cortex’s ability to “think” about all the complex stuff being processed.

  7. asimpleplan says:

    Ary Goldberger has some interesting work on related issues – my sketchy understanding of it is that he hypothesizes that when the body is working properly it exercises control over muscles that is complex in the technical mathematical sense of the word. He tries to exploit this for early detection of neurological disorders affecting movement by having subjects complete a tracing task and looking at the complexity properties of the error, rather than the magnitude. See https://academic.oup.com/biomedgerontology/article/68/8/938/547929/Use-of-a-Tracing-Task-to-Assess-Visuomotor; I believe it references much of the other related work.

  8. meltedcheesefondue says:

    >And I don’t understand those times – why Behaviorism ever seemed attractive is a mystery to me, maybe requiring more backwards-reading than I can manage right now.

    I’d always thought that behaviourism was a reaction to things like Freudianism.

    • Enkidum says:

      Methodological behaviourism, in which you can only draw conclusions from quantifiable evidence (behaviour of some kind), is now the gold standard of psychological research, and has been since around World War I. It was a very important reaction to, as you say, Freudianism, and Intuitionism, etc. However, probably largely as a result of some slightly weird personality quirks of the people involved, this developed in parallel with a rigid insistence that these experimental limitations were fundamental truths about the mind, and an incredibly silly view of what the brain does.

      • whateverthisistupd says:

        Well, it’s not uncommon that when a scientist creates a revolutionary new paradigm, they think it’s the “full answer”.

  9. Douglas Knight says:

    Why do all other reviews emphasize the title and the catch-phrase “behavior is the control of perception” but you ignored it? Does that sentence mean nothing? Then why are all other reviewers so enthusiastic about it?

    • Scott Alexander says:

      I don’t understand the details of the control theory involved very well. I get the impression that Powers thinks it’s super-important that, contra the Behaviorists who think perception leads to behavior, in fact behavior is a control system controlling perception (eg my perception of how far the car in front of me is). Probably if you are mathy and you want to design a control system this matters a lot. To me it didn’t seem like a super-interesting point, especially since I wasn’t starting from a Behaviorist standpoint anyway.

      • David Condon says:

        “contra the Behaviorists who think perception leads to behavior”

        No, behaviorists believe changes in the environment predict changes in behavior. A behaviorist wouldn’t normally use a term like perception.

        • Richard Kennaway says:

          A behaviourist would use the word “stimulus” and say that stimuli produce behaviour (or “responses”).

        • 27chaos says:

          Huh, I read Behaviorists as Buddhists my first time through.

  10. Mengsk says:

    The control system seems like an abstract model for any sort of goal-directed action. Does the thing I’m doing move me towards a goal? Up-regulate. Does it move me away from a goal? Down-regulate, and try something else.

    In fact, I’m having trouble imagining a system that could produce goal-directed action without something like a control system (e.g. something that stops the system from behaving in ways that don’t advance its goal).

    • Steve Sailer says:

      A lot of good ideas seem tautological in retrospect, but they were awfully exciting when first being developed.

      One question is whether control systems theory can be extended into other fields that haven’t benefited from it yet. Or has this good idea been thoroughly exploited by now? Control systems are a big deal in Heinlein sci-fi novels in the 1950s, but I don’t hear that much about them lately, perhaps because the field is pretty mature and there aren’t that many bleeding edges anymore.

      • idiotomic says:

        A lot of good ideas seem tautological in retrospect, but they were awfully exciting when first being developed.

        I quite relate to this, especially in the context of control systems.

        I stumbled across control systems in an elective class for BME, titled “Computational Systems Biology and Pharmacodynamics”. It encompasses how circadian rhythm arises (oscillations in general), a mechanism for drug tolerance, why drug efficacy can vary depending on time of day, etc. I find it all pretty fascinating.

        I was very surprised that it was an elective class (and relatively obscure) because I thought that thinking about physiology as a complex system (incorporating control theory and network theory) provides a much more comprehensive/insightful understanding of physiology. I’m not sure if I was just previously exposed to biology/physiology in a non-cohesive way, or if it was an illusory revelation (I weirdly can’t really explain how it changed my understanding, because I soon found that I couldn’t recreate how I used to understand it), but it doesn’t seem like a common concept in biology, and that seems rather supported by this blog post–but perhaps someone can provide another perspective?

        • whateverthisistupd says:

          I don’t know; to me, it seems like a pretty common concept in biology.

          This whole post strikes me as a bit odd, in the sense that it seems like a lot of people are hearing this stuff for the first time. I guess the audience tends to be more folks with computer and engineering backgrounds than biology/neuroscience?

    • wintermute92 says:

      I think the key takeaway here is having an established, internal feedback loop. There are (a few) goal-oriented agents without any control system.

      The crudest possible goal-directed system is totally open. You try something, if it doesn’t work you don’t update except “try again”. This isn’t necessarily random guessing, you might be working from starting information, but the point is that all you get back is “succeeded” or “failed”. For example, hitting a hole in one in golf: once you’re hitting the middle of the green, you can’t actually tell why you missed – did you slice a tiny bit, or was it wind, or a dirty ball? So you just keep swinging the same way until one sinks.

      A closed system is anything with feedback. If you don’t get the goal, you update in some purposeful way aimed at doing better next time. And yeah, this is almost everything. If you miss the golf green, you adjust your swing to aim better. Even the behaviorists sort of allowed for this: you respond to stimulus, fail, get punishment, and change your response for next time.

      Powers’ insight seems to be slightly more specific: he says behaviorists neglect internal control systems. If you get closer to the car ahead of you, you don’t run through a full-brain sequence of “that leg motion failed, I am punished, I will move my leg differently next time”. You keep a high-level goal constant and use your perceptions to run an internal control system where closer/further aren’t failures, just parts of the low-level control loop.
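
      A minimal toy sketch of that kind of inner loop, in Python, with the desired gap, the gain, and the lead car’s path all made up for illustration (nothing here is from Powers): the loop never labels a change in the gap as success or failure, it just keeps acting to shrink the error.

        # Hedged sketch: a closed-loop car-following controller.
        # desired_gap, gain, and the lead car's path are invented values.
        def follow_car(lead_positions, desired_gap=30.0, gain=0.8):
            my_pos = 0.0
            gaps = []
            for lead_pos in lead_positions:
                gap = lead_pos - my_pos      # the perception being controlled
                error = gap - desired_gap    # deviation from the reference
                my_pos += gain * error       # act only to reduce the error
                gaps.append(round(gap, 1))
            return gaps

        # Lead car starts 50 units ahead and cruises at 5 units per step.
        print(follow_car([50.0 + 5.0 * t for t in range(30)]))
        # The gap settles near 36 rather than exactly 30: a purely proportional
        # loop trails a moving target with a constant lag.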

    • vaniver says:

      Control systems are typically thought of as responding to error, often in a proportional (but not necessarily linear) way. Your satellite is here, you want it to be there, and the system essentially turns the state space into a potential energy bowl centered on the desired state.
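
      A toy numerical version of that picture, with every value invented for illustration: the control action is just the downhill slope of a quadratic bowl centered on the desired state, plus some friction so the state settles instead of ringing forever.

        # Hedged sketch: proportional control as sliding down a potential bowl.
        # x_ref, k, and damping are arbitrary illustration values.
        x_ref = 10.0          # desired state (the bottom of the bowl)
        k, damping = 2.0, 1.5
        x, v, dt = 0.0, 0.0, 0.05

        for _ in range(400):
            force = -k * (x - x_ref) - damping * v   # bowl slope plus friction
            v += force * dt
            x += v * dt

        print(round(x, 2))    # ends up essentially at 10.0, the bowl's bottom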

      This typically doesn’t involve planning. The planning is all implicit in the design of the control system. A system that, say, is given a traveling salesman problem and finds a good tour probably shouldn’t be described as a control system, or at least, you’ll want very different math to describe it.

  11. klfwip says:

    What exactly a control system is is so vague (and its applications so varied) that not seeing a connection between it and what the brain does somewhere would be more surprising. Why not instead say that the overarching paradigm of every level is likelihood maximization? Or my personal favourite, describing all neural systems as if the neurons and each tier above them are desperately trying to avoid being consumed for another purpose, so they just look at whatever signals they are getting and frantically attempt to come up with _some_ kind of pattern in them in the hopes that another part of the brain will find a pattern in what they do which eventually causes a reward input, reinforcing whatever path caused it.

    And I suppose the cause of all this is that at some level our brains probably do use perceptual control systems to control our actions. And they use genetic algorithms. And they use Newton’s method, etc. If you try to solve a problem in machine learning by deciding exactly what method you’re going to use and then optimizing parameters, you’ll often end up with a noisy, slow model that barely seems connected to its inputs. If instead you throw in five or six different models until you find one which looks nice, then run it on some validation data to make sure you’re actually seeing an improvement, and run with it, you might actually get somewhere. Evolution, like any other algorithm designer, needs to build brains out of components simple and general enough that they could already exist before they had their current purpose. So you churn out millions of animals with slight variations of existing objects wired in to new inputs, and a few of them miraculously start reacting better in certain situations. And therefore, you are going to see fractal repetition of generic and simple objects like a control system wherever you look.

    • Richard Kennaway says:

      There is nothing vague about what a control system is. A control system is something that acts in such a way as to maintain the value of some variable at some reference value, in spite of other influences that would move it away from that value, drawing on a source of energy to do so. Nailing something in place does not make a control system; passively isolating it from external influences does not make a control system; acting upon it so that disturbances are opposed makes a control system.

      The applications are indeed varied: just google “control theory”. They are ubiquitous in engineering.

      Powers’ insight is that they are ubiquitous in living systems, at all levels from the physiologically established low-level details of motor control, to how we act in the world. The upper levels of his proposed hierarchy are avowedly speculative, but the application to psychology (see my other comment about the Method of Levels) does not depend on those speculations.

    • whateverthisistupd says:

      You have to think carefully about what “problem” means in evolutionary terms. Since evolution has simpler components gradually give rise to more complex ones, it seems likely that the same types of “solutions” would be the shortest path on the random walk, and that you would then get something much bigger built out of those parts.

      “If you try to solve a problem in machine learning by deciding exactly what method you’re going to use and then optimizing parameters you’ll often end up with a noisy slow model that barely seems connected to its inputs.”

      Brains aren’t machines in that sense. Nobody decided what to use, stuff that was already there gradually changed to provide a survival/sexual benefit.

      This is a good line of thinking though. What systems do the simplest brains use? What additionally goes on at the next level? How did that emerge from what came before?

  12. Jack V says:

    Huh, that’s really interesting.

    FWIW, wikipedia says behaviourism “emerged in the late nineteenth century as a reaction to depth psychology and other traditional forms of psychology, which often had difficulty making predictions that could be tested experimentally”. So a gross parody might be, psychology started with theories where no-one even expected them to be falsifiable, and then moved on to ones which aspired to falsifiability but didn’t really reach it (Freud?), and then ones which actually produced experimental predictions (behaviourism?) and then ones with predictions which sometimes came up true (after that) and then ones which were verified repeatedly and generally accepted to be fairly certain (in progress)… 🙂

    The control theory is interesting. As soon as you say it, it sounds inevitably true for medium to low level systems, driving etc.

    I’m still thinking how much it’s an accurate description or metaphor for higher-level processes. We clearly have something in our brain which says, “care about others” and “imitate others” and “bow to social pressure” and “learn by experimenting (aka play)”, which presumably implement themselves by commanding various lower-level things in the brain. I don’t know if those are control-theory in the sense of aiming for a specific amount: I suspect more like, they produce a desire of various strengths, and the strongest at a particular point wins.

    And you can see how those kinds of things produce complicated behaviours like “invent communism”. But I don’t know if “invent communism” is best viewed in a control-theory light or not.

    • whateverthisistupd says:

      It’s not. At the higher level, you have different systems interacting in complex ways. And rather than a setpoint, it’s probably more like a flexible range that can be modified by other input. The cortex “thinks” about all this stuff and you get complex emergent behavior with conflicting impulses and drives.

  13. P. George Stewart says:

    I think as with the earlier post re. psychology test/survey results, what’s wanted to get at the higher tiers is some philosophy, specifically ordinary language philosophy, preferably in collaboration with philosophers.

    There are undoubtedly “things” at the higher level that are “control systems” for something, but the question is whether the categories this guy (or anyone else investigating the topic) is using actually “cut nature at the joints”, rather than simply gerrymandering disparate things into odd, ungainly lumps (hence the dubiousness of the author’s analysis as he goes up the tiers in abstraction, yet at the same time the echoes of plausibility).

    To put it another way, perhaps the categories that we need to think about at the higher levels don’t have a name yet, and the categories we talk about (which have haphazardly developed through a process that’s part social construction) only have a funhouse mirror/vague relationship to the real thing.

    It’s a bit analogous with this: consider the ocean of subtleties in terms of feelings that don’t have a name, yet are operative in our psyche. (Notoriously, sometimes you find these things cropping up in one language but not in another.)

  14. John Nerst says:

    This reminds me of Douglas Hofstadter’s ideas about consciousness. IIRC, he theorizes that human consciousness is the result of self-representation, with self-representation being the result of ever more complex feedback and control mechanisms. In I Am a Strange Loop he discusses life* being essentially about feedback, and that even the simplest control mechanisms (like a thermostat) should be thought of as containing minuscule amounts of consciousness. It’s a fascinating idea, and (AFAICT) compatible with neuroscientist Antonio Damasio’s theory about core consciousness being a representation of a change of state in a lower-level self-representation.

    *This goes very well with the phenomenon that responding to feedback makes systems “seem alive” – ref. Null Hypothesis above

  15. shin says:

    I think it is hard to think about processes that are supersets of what is already known. The brain must be able to do control-theory processes to do many very basic things, but control theory alone cannot explain things like unsupervised learning, complex logic, and a bunch of other things.

    This reminds me of the Aeon article that states:

    Each metaphor (about the brain) reflected the most advanced thinking of the era that spawned it.

    If the current AI era doesn’t end in AGI too quickly, I expect some guy to poke around and come to the conclusion that “the brain is obviously generative adversarial networks” or some other idea, show some illustrative examples, and handwave away everything that doesn’t quite fit.

    • moridinamael says:

      One of the issues with drawing insight from metaphors is that the metaphor will give you a lot of insight on one level and lead you to be completely wrong on other levels. In order to understand or predict where it goes wrong you have to leave the metaphor behind, which leads you to question its utility in the first place.

      For example, in control theory, you can always draw a diagram that has certain specific attributes, namely the “system” being controlled, the system input (the “knob” that is being actively controlled), the system output (something the system is doing that you’re measuring), and the controller itself. And of course the assumption is that there aren’t extraneous wires and tubes leading into this system from other parts of the chemical plant doing who-knows-what, influencing the system or the controller itself in complex ways. In a real brain, though, I doubt that you’re ever going to have these perfectly isolated control systems that can be written down as block diagrams. Instead you’ll maybe have something that sort of looks like a block diagram with tendrils of information flowing to and from it, leading all over the place, connecting to dozens or hundreds of other brain systems, modulating and being modulated by them.

      This doesn’t mean the control theory metaphor is useless, but it does mean that many of the assumptions we make in order to think about control theory don’t hold true for the brain.

  16. Plucky says:

    The control-systems theory of driving works quite well given the process of actually learning how to drive a car. Car-driving is a skill we learn in mid-to-late adolescence, and one that is taught explicitly.

    The very first step is at the level of muscle motor control: focusing on how to use your foot to accelerate/brake at the desired rate. This has to happen first because it is foundational: without it, nothing else can work properly. This is why the first driving lessons happen in empty parking lots. The control-systems framework might describe it as being impossible (or extremely difficult) to train higher-level control systems if the lower-level systems are unreliable. It would also jibe with limited attention if we posit that attention operates at the control-system level, i.e. that we can only attend to one system (or finitely many) at a time. If the this-is-how-much-pressure-to-apply-to-the-brake-pedal-with-your-foot system isn’t trained well enough to work on autopilot, then you have to pay attention to that system instead of the I-need-to-check-my-blindspot-before-changing-lanes system.

    If the skill of ‘driving’ is constructed as hierarchical control systems, then each system in the hierarchy cannot be trained until the lower-level systems it directs operate with limited error.

    It’s not all hierarchical systems though. The key to good driving is being aware of one’s surroundings. The top-level control system of a good driver is the ‘monitor for unexpected or potentially unsafe events’ system. It’s what a programmer would describe as event-driven programming, and is not really right to describe as a ‘system’ because it’s not systematic. It picks up an event, decides what to do, and then engages the subsidiary control systems. It’s the executive level of driving, and the whole point of training all the subsidiary systems is precisely so that this non-system system can get one’s finite attention. However, this system is not strictly necessary for driving – you can get from here to there just fine (most of the time) if that system is impaired by a phone or alcohol, just with a much higher risk that the unexpected won’t get dealt with and will kill you. Drivers with blackout-drunk .18 BACs still make it home the majority of the time. Furthermore, plenty of people never really get to ‘that level’ of driving – many drivers top out at the maintain-distance-between-cars system, and spend the majority of their attention focused on the bumper of the car in front of them.

    The problem, then, with the vaguer, higher-level control systems that Powers describes, and about which you are skeptical, is that there may not necessarily be a neurological template for such a system, because such systems are not strictly necessary for survival. Hell, some people may not even have them at all, like they top out at middle management. Hierarchical systems are built from the bottom level up, and each new layer in the hierarchy must be constructed, in many cases consciously. For the lower-level ones there’s plenty of good reason to assume they will work the same way across humans, because we evolved neurological hardware to make constructing such systems easy, but I would bet that the higher one gets in the hierarchy, the less similar people’s control systems get neurologically, because building them goes from using built-for-purpose neurology to Junkyard Wars/Scrapheap Challenge (https://en.wikipedia.org/wiki/Scrapheap_Challenge).

    • Eva Candle says:

      *-*-*-*-*-*-*-*-*
      “Global brain dynamics don’t look like the dynamics of a well-functioning control system, which would fluctuate in response to input, but always tend to stably return to some fixed rest state.”
      *-*-*-*-*-*-*-*-*

      There are some pretty celebrated people, and some pretty strong research, that points in the opposite direction.

      It was Lars Onsager–himself both a Nobel-winning theoretical chemist and a classic “Aspie”–who said: “It [the electroencephalogram] is like trying to discover how the telephone system works by measuring the fluctuations on the electric power used by the telephone company.”

      Nowadays Onsager’s program has been carried to completion, by correlating space-time correlations in local brain metabolic rate with the anatomic connectome of nerve-fibers. There is a vast literature, see (e.g.) “The parcellation-based connectome: limitations and extensions”, “Parcellating cortical functional networks in individuals”, and “A multi-modal parcellation of human cerebral cortex” (and many more).

      The emerging consensus is that the observed strong correlation of functional-metabolism with connectome-anatomy is solid evidence that the modern Kandel-style brain-synthesis (op cit) is pretty much right.

      What about the large-scale coherent oscillations that are so strikingly evident in electroencephalograms? These were anticipated by von Neumann in his (posthumous) US patent “Non-linear capacitance or inductance switching, amplifying and memory organs”. Von Neumann envisioned computers in which information was stored and transmitted, not by the amplitude, but by the bistable phase of large-scale coherent oscillations.

      If today’s computers are (in effect) AM radios, then what von Neumann had in mind were (in effect) FM radios. The relative immunity of the latter kind of radio to many sources of noise is familiar to pretty much everyone.

      And yes, von Neumann’s brain-inspired electronic computers were built—and sold commercially in Japan—and they worked just fine! 🙂

      All of this is good evidence that the longed-for “Darwinian Synthesis” is in fact well underway, and moreover has plenty of scientific, mathematical, and medical overlap with what the evolutionary biologists call the “Modern Synthesis” – the synthesis of Mendelian genetics with Darwinian evolution.

  17. Eva Candle says:

    Kleiner’s survey “Abstract (Modern) Algebra in America 1870–1950: A Brief Account” (in the recent MMA compendium “A Century of Advancing Mathematics”, 2015), documents how “abstract modern algebra” (as it was called in the 1950s) evolved to become simply “algebra”.

    Similarly, Eric Kandel’s recent “Reductionism in Art and Brain Science: Bridging the Two Cultures” pretty thoroughly documents how “neuropsychology” has mostly finished evolving to become simply “psychology”.

    Of course, “psychology” so evolved is no more a finished scientific topic, than “algebra” is a finished mathematical topic.

    In particular, Kandel’s chapter 3 “The biology of the beholder’s share” pretty nicely summarizes the themes that Will Powers develops in “Behavior: the Control Of Perception” and does a more graceful job than Powers of linking these ideas into a broader scientific, mathematical, and cultural context.

    For example (from pages 35-7 of Kandel’s “Reductionism in Art and Brain Science”):

    *-*-*-*-*-*-*-*-*
    In terms of the new science of mind, the highest level is the brain, an astonishingly complex computational organ that constructs our perception of the external world, fixes our attention, and controls our actions.

    The next level comprises the various systems of the brain: the sensory systems such as vision, hearing, and touch and the motor systems for movement.

    The next level is maps, such as the representation of the visual receptors of the retina on the primary visual cortex.

    The level below maps is that of the networks, such as the reflex movements of the eyes when a novel stimulus appears at the edge of our visual field.

    Below that is the level of neurons, then synapses, and finally the molecules.
    *-*-*-*-*-*-*-*-*

    As usual in modern neuropsychology, the subsequent discussion reasons, in brief, that the hierarchy of neural anatomy is inarguable, and (inarguably too) neural function reflects neural anatomy.

    Kandel’s book then goes on to explicate, in plain non-technical language yet in considerable depth, the workings of a culturally crucial human activity that Powers’ worldview does not easily explain: namely, the psychological processes (both cognitive and anatomic) associated with everyday activities like drawing pictures and viewing pictures.

    Of course, Kandel’s book is about more than the creation and appreciation of art: it aims at an understanding of human psychology in its entirety (as does Powers’ work). But of the two authors, Kandel and Powers, it’s pretty clear that Kandel’s work comes closer to achieving its synthesis, in that Kandel’s work markedly exceeds Powers’ in depth, in breadth, and (especially) in pedagogic influence.

    These are the reasons why, just as “abstract modern algebra” became simply “algebra” (to the great benefit of mathematics and science), Kandel-style neuropsychology has become simply “psychology” (to the great benefit of medicine and the humanities).

    To borrow the phrasing of the OP, Kandel-style neuropsychology is today’s Darwin-style/Newton-style synthesis that in 21st century psychology (for many psychologists, especially students, but not all psychologists) “ties everything together”. Like algebra in its modern abstract realization, the Kandel-synthesis admirably supports both open-ended creativity and practical applications.

    Of course, the Kandel-synthesis (in contrast to the Powers-synthesis) is designed to be incomplete, because grand syntheses never are complete. That’s why they’re “grand”. 🙂

  18. Skivverus says:

    Definitely a useful model; a potential refinement comes to mind.
    Specifically, feedback loops. The “high-level” systems can direct or override the lower ones (see other comments on piano or driving), but there’s no reason to think the influence only goes one way – consider touching a hot stove. The low-level systems don’t bother consulting higher-ups at all here, but they do let them know not to send them that way twice.
    Similarly, fight-or-flight responses generally aren’t a result of high-level thinking.
    So we can say that either (a) not all control systems are limited to managing lower-level systems (though levels are still probably a useful framework), or (b) not everything contributing to behavior is a control system. Mostly I think that depends on the definition of ‘control system’, which I’m not wholly clear on myself; it’s at least plausible to me that (~b) is true (and therefore (a)) if attention counts as a control system.

  19. The Red Foliot says:

    Why does the abstraction of ‘communism’ have to be a controlling factor? I would rather say communism is a social adjunct used to express one’s inner feelings, but that it is one’s inner feelings themselves that control one’s thoughts and actions. External inputs and the fundamental architecture of one’s brain work to impel one to action by astounding it with different feelings. Those feelings can vary in intensity; an intense feeling can provoke a dramatic response, a weaker one, a mild response. They pull you in different directions; their intensity drops as they reach parity; but there are so many of them, and such are the vagaries of life, that the needle of your compass, this collection of your inner feelings, is sometimes thrown awry so that it must violently reassert itself. A person whose emotions are fundamentally awry might always find himself in a state of high passions. He might be driven to communism for its radical nature, one that matches the unsettled emotions of his own mind. Thus, ‘communism’ is not the impulse, but the response to one.

  20. Mengsk says:

    The more I think about it, the more it seems like the idea of a control system is implicit and fundamental to cognitive psychology, in the same way that the idea of “preference” would be for economics. I don’t know if this sort of construct can be falsified empirically.

    “For example, hide Lenin’s pen and paper so that he can’t write communist pamphlets, and he should start doing some other communist thing more in order to make up for it and keep his level of communism constant. I think some perceptual control theory people believe this is literally true, and propose experimental tests (or at least thought experiment tests) of perceptual control theory along these lines. This seems sketchy to me, on the grounds that if Lenin didn’t start doing other stuff, we could just say that communism wasn’t truly what he was controlling.”

    Applied to economics, the analogous experiment would go something like this:
    Hypothesis: a person has a preference for X.
    Intervention: you block one route by which that person could satisfy their preference for X

    If the person tries to find another way to satisfy their preference for X, then we conclude that they truly do have a preference for X. If they don’t, then we conclude that their preference for X wasn’t that strong to begin with. The notion that this person’s behavior is driven by their desire to satisfy a preference doesn’t seem like a falsifiable conclusion, but an assumption that you have to make if you want to analyze their actions through a certain theoretical lens.

    Of course, it’s been pointed out that trying to explain human behavior solely in terms of rational action to satisfy preferences doesn’t work in many situations (e.g. when you’re dealing with mental illness, or in any situations where cognitive biases have a significant impact). It seems like attempting to use control systems to explain higher level cognitive activities is a similar genre of error.

    • David Condon says:

      In a choice experiment, a preference is demonstrated when an individual repeatedly chooses one action rather than another. It is a behavioral claim grounded in behaviorist ideas. What you are referring to is a motivation. A claim regarding preferences is not a claim regarding motivation. Economists sometimes use terms like utility, preference, and value interchangeably so the confusion is understandable.

  21. Michael Arc says:

    Aren’t social systems like police forces explicitly constructed as control systems by their creators? Military systems certainly are. If so, this isn’t so much an explanation of a natural phenomenon. More like a musty and discarded user manual.

    It seems to me that many liberal institutions were explicitly rebuilt from the basis of a control-systems philosophy after WWII. Six Sigma is the invention of a control-systems concept corresponding to invention. Modern portfolio theory is the application of control systems to financial investment. There was a time when undergrad physics skills applied to the creation of control-systems analogues to social institutions could be awarded with an instant Nobel Prize.

    • Scott Alexander says:

      I think it’s certainly true that people designed the police in order to prevent crime, but I don’t know to what degree it’s useful to think of them as a control system. I.e., could a cyberneticist predict/understand things about policing that a criminologist/social scientist/police chief couldn’t?

      • AnonYEmous says:

        Probably police could be termed an externalized control system, which is less effective than an internalized control system. It’s proven to be effective enough to be semi-linear (more policemen, less crime is pretty non-controversial, I would imagine, even if not 1-to-1), so people are willing to ratchet up the amount of police dollars to reduce the amount of crime.

        • whateverthisistupd says:

          Not as linear as you might think. There have been some recent examples, like the “soft strike” by police in New York, when crime actually went down. And this did not appear to be simply a matter of it not being recorded. (It was a while ago that I read this, but the basic theory was that since police were only responding to serious crimes like thefts and murders, those went down, since the focus of the police was less diluted. This stands in contrast to the “broken windows theory”.) There were some other examples of this phenomenon that I would have to find. I recall one from some country where, essentially, the police force was extremely corrupt, and by popular demand there was a massive reduction of police, and crime went down (which makes sense if a lot of the crime was being done by the police or under police protection).

          Of course, there’s a lot of confounding factors- what one means by crime, the length of time, the prior social, political, and economic conditions.

          If you’re interested in this I could dig up the resources – admittedly my main source for compiling this info is partisan, but that doesn’t necessarily mean the information is false.

    • Douglas Knight says:

      Modern portfolio theory is the application of control systems to financial investment.

      Do you just mean that rebalancing your portfolio is negative feedback? (I guess merely acknowledging that uncontrollable noise exists is a pretty good step.)

      Scott: William Bratton, the most famous policeman of our time, is mostly famous for broken windows policing, which doesn’t seem much like control theory, but is also famous for “compstat,” keeping track of local crime rates and redistributing the police, which is certainly an example of negative feedback.

      • Scott Alexander says:

        You can become a famous policeman by saying “move police officers where the crime is”?

        • vaniver says:

          Didn’t someone become a famous criminologist by pointing out that when imprisoned, people can’t commit crimes against the general public?

        • Chris Hibbert says:

          It was more like “if we measure crime tied to location data, we can move police officers to where the crime is.” There were probably already police administrators who moved cops around, but like with the Moneyball revolution, actually measuring stuff makes a difference.

        • whateverthisistupd says:

          You have to keep in mind that many police departments won’t hire people who score too high on their IQ tests. Really.

    • moridinamael says:

      I’m afraid that using the phrase “control system” in this way bends the metaphor beyond the breaking point.

      The thermostat is a good canonical example because it exhibits all the necessary and sufficient properties of a classical control system. There is some signal that the thermostat measures (temperature), and the thermostat directly controls an “AC is on” or “heater is on” switch. There may be a more complex model of the whole system coded into the thermostat so that it can, for example, turn on the heater in anticipation of the temperature falling too low, but there doesn’t need to be.
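
      To make that concrete, here is a hedged toy version of the thermostat in a few lines of Python – the setpoint, dead band, and outside drift are all made up, but the only ingredients are the ones just listed: a measured signal, a reference, and a switch.

        # Hedged sketch of the canonical thermostat. All numbers are invented.
        def thermostat_step(temp, setpoint=70.0, band=0.5, drift=-0.3):
            heater_on = temp < setpoint - band   # compare perception to reference
            ac_on = temp > setpoint + band
            temp += drift                        # disturbance it never models
            if heater_on:
                temp += 1.0
            if ac_on:
                temp -= 1.0
            return temp

        temp = 60.0
        for _ in range(50):
            temp = thermostat_step(temp)
        print(round(temp, 1))   # cycles within about a degree of the setpoint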

      If we’re going to call the police force a “control system” in a technical sense, we would need to identify what variable(s) the police are in direct control of, what variable(s) they are trying to keep within certain bounds, and the rules by which they are manipulating the former in response to changes in the latter. I don’t think police think of themselves this way, nor do I think that politicians or bureaucrats think of the police this way.

      I think it’s probably important for clear thinking that we avoid conflating the colloquial idea of a “system intended to control something” with a “control system”. I do think you’re exactly right that “targeting specific outcomes via a well-developed theory of the underlying system” was a powerful new meta-level idea in the 20th century, especially as applied to social systems. But the analogy between control systems and human psychology that Powers is making is a much bolder and more precise claim.

      • Michael Arc says:

        Modernity only had a few hammers, police were surely one of its nails.

      • whateverthisistupd says:

        Yeah, they’re only a control system in the sense of a system that controls things – no connection to the scientific/engineering meaning.

  22. jasongreenlowe says:

    Let me see if I understand this theory.

    There are nine tiers of control that humans can understand. The lowest tier deals with concrete, physical manifestations (Malchut), like twitching a single muscle cell. The next two tiers up involve the kind of muscular coordination that would be involved in receiving a message from the brain (Yesod) or dancing (Hod). Somewhere around the fifth or sixth tier is a layer that’s primarily responsible for keeping things in the proper relationship to each other, like making sure you’re the right distance away from a car (Tiferet). The seventh tier is responsible for the conscious performance of specific skills like driving or hammering (Da’at), the eighth tier is responsible for emotional relationships with other people (Binah), and the ninth tier is responsible for abstract, intellectual systems like communism (Chochma). Nobody has ever observed a tenth tier, but if they did, it would probably be an abstraction of abstractions – whatever that means – or, in some sense, a layer that was beyond all abstractions (Ayn Sof).

    • Scott Alexander says:

      Huh!

      I didn’t mention it in the review, because it seemed too hokey, but Powers actually mentions for a sentence or two that maybe there’s a tenth tier that corresponds to “enlightenment”. I’m going to consider your theory empirically confirmed.

    • Cjcashel says:

      This is fascinating to me as someone with an interest in cognition and kabbalah, but wouldn’t dancing better correspond to netzach?

  23. humeanbeingblog says:

    Behaviorism was an outgrowth of logical positivism. As AJ Ayer argues in Language, Truth, and Logic, the view that all meaningful claims are logical constructs of sense data implies that claims about other minds are either meaningless or logical constructions of sense data. To avoid solipsism, then, we must say that claims about other minds are logical constructions out of sense data. That means that the literal meaning of claims about the mental states of others is to be analyzed in terms of your experiences of them.

    That’s a little abstract. For those of you who might not be familiar with Ayer and Russell, a simpler way to put it is this: If you’re a certain kind of hard-eyed, bullet-biting empiricist (the Dan Dennett type), the idea that there could be these things, feelings, that other people have is outrageous. It’s spooky and unverifiable. Other people’s feelings aren’t scientific at all. The only thing we can meaningfully, scientifically discuss is what we can observe: actions.

    • Enkidum says:

      Speaking as a hard-eyed, bullet-biting empiricist whose MA supervisor was supervised by Dennett, I don’t think that’s the view he has at all. (Also not the view of the Churchlands, who are the other people usually brought up as modern behaviourists.) To be fair, I think he’s remarkably unclear about what his view of conscious experience is, and I’m not sure he has a coherent one at all – he spends most of his time explaining what it isn’t.

    • Said Achmiz says:

      This is a very inaccurate description of Daniel Dennett’s views.

      Dennett is quite fine with the notion that feelings exist and that people have them. He does not claim that there’s any problem, in principle, with the idea that people “have” “feelings”, or with any particular claim about these “feelings”, etc.

      Dennett does, however, point out (quite reasonably, imo) that if we base our ideas about what feelings are, and what feelings people have, etc., solely on those very people’s self-reports – that is, if we just treat people’s self-reports about their own mental states as literally true and infallibly correct – well, that is, indeed, unscientific. Mightn’t someone be mistaken about their own mind, in some way? (Decades of cognitive psychology research – and, indeed, ordinary experience – tell us that people certainly can be mistaken thus!)

      Dennett’s notion of “heterophenomenology” says that we should neither discount a person’s statements about their own internal state, nor treat them as infallible. Rather, we should take such self-reports and place them alongside all of the other evidence that we have about what’s going on in that person’s mind. (Which might be their behavior, it might be the output of an fMRI, it might be inconsistencies or contradictions in the self-report, it might be any number of things.) Sometimes the conclusion we come to is that the self-report is a more-or-less accurate portrayal. Other times the conclusion is that no, what’s going on in the person’s mind is really not much like what they say is going on.

      But even in the latter case, note that the conclusion isn’t that the heterophenomenological report is false! That is: the subject says to us, “I feel X.” To which we respond: “Yes, no doubt that is as honest a description of your subjective experience (what of it is available to your conscious introspection and verbal output) as you can make it. But as an accurate portrayal of your mental workings, it simply fails. What’s actually going on in your mind is Y…”

      If that scenario sounds absurd to you, consider this TED talk by Dennett (it’s about 20 minutes and rather entertaining), which should give the flavor of the thing. (In it, Dennett talks mostly about optical illusions of various sorts, which is the most obvious application of the idea; for the harder cases, you’ll have to read his books and essays.)

      • humeanbeingblog says:

        My parenthetical reference to Dennett was clearly misunderstood. I hoped it would clarify a point that I was trying to make; instead, it seems to have obscured that point entirely.

        I was not trying to attribute ANY of the views I discussed to Dennett. I was discussing the views of early positivists like Ayer. Dennett is not a positivist. BF Skinner was a positivist.

        The reference to Dennett was only to draw a connection between Dennett and Ayer that might help us, today, understand Ayer’s (and Skinner’s) mindset. Dennett is willing to endorse some claims that are pretty radically counter-intuitive because they follow from his own conception of the scientific world-view. That’s a feature of Dennett that I hoped readers of this blog are familiar with. The point I was trying to make is that early behaviorists have this feature, too. Behaviorism is a consequence of logical positivism, which is one form (not Dennett’s form!) of hard-headed empiricism. It’s counter-intuitive to say that mental states are constituted entirely by individual behaviors. But that is what logical positivism implies; Skinner followed that view to its natural conclusion.

        Apologies for the confusion.

      • whateverthisistupd says:

        I believe Enkidum was referencing Dennett’s seemingly radical eliminativist take on consciousness.

    • Protagoras says:

      Behaviorism in psychology seems to have developed on its own; not that there was no mutual influence with philosophy, but to say it came out of Ayer gives him entirely too much credit, and to say it came out of Logical Positivism is equally untrue, in addition to being further problematic because not all Logical Positivists were even behaviorists. It is a pet peeve of mine when people equate the whole of the Logical Positivist movement with Ayer’s rather distorted and oversimplified summary. And as others have said, you are also totally unfair to Dennett. Oh, and Russell.

      • humeanbeingblog says:

        I did not say that behaviorism comes from Ayer. I said it comes from positivism. Perhaps I went too far in suggesting that positivism was the sole motivation behind behaviorism. Obviously, things are complicated. But Skinner was a logical positivist (or so I’ve read), so it doesn’t seem crazy to claim that this had an effect on his views on psychology.

        Not all positivists are behaviorists. Not all behaviorists are positivists. But in the 30s, when these theories were gaining prominence, there was substantial overlap between these categories.

        ETA: Fair enough about not taking Ayer as encapsulating the whole of positivism. I know that Carnap is generally considered to be the better source; I cited Ayer only because that’s who I’m more familiar with.

        • David Condon says:

          The connection you’re looking for is Ernst Mach -> BF Skinner. So yes, there was some overlap between the ideas.

  24. Richard Kennaway says:

    Scott, you might be interested in looking up the work of Warren Mansell (https://www.research.manchester.ac.uk/portal/Warren.Mansell.html), Tim Carey (http://www.flinders.edu.au/people/tim.carey), and their colleagues in applying PCT to psychotherapy, under the name “Method of Levels”. The Method of Levels is an idea that Bill came up with, initially as just an interesting mental exercise in exploring the ideas of PCT. He left it out of the book you have just read for being insufficiently developed at the time, but in discussions with others, some psychologists became interested in using it as a method of therapy.

    I know little of psychology, so I can’t usefully say more than that. I don’t know if it has yet been subjected to any experimental tests of efficacy.

    BTW, while calling him “Will Powers” makes a cute joke, he was known as Bill to those on first name terms.

    • Richard Kennaway says:

      I have just remembered that one “Mark” gushed embarrassingly here about the Method of Levels a couple of months ago. I hope that experience has not poisoned the well for you.

  25. nhnifong says:

    I think in Jeff Hawkins’ On Intelligence the brain is described very similarly. Firstly, because Hawkins’ neural networks were pretty good at perception (and the latest deep nets even better), it’s very tempting to frame control as an extension of perception, so we can use our powerful perceptual machines on new problems.

    Secondly, it seems to be a convenient explanation of how the cortex evolved: call everything in the brain that existed before the cortex the reptilian brain. Assume the cortex is a uniform perceiving system that compresses information and makes predictions. If it were made to perceive the autonomous actions of the reptilian brain in addition to the signals from the senses, it could offer incremental value by predicting what the reptilian brain would do to maximize utility, better than the reptilian brain itself can do.

  26. Eli says:

    Well of course it’s control underneath! Now what do I have to pay to get you to read Andy Clark’s Surfing Uncertainty, which came out this past year? It integrates the “Bayesian brain” and “perceptual control” paradigms to begin explaining almost, but not quite, everything the central nervous system does.

    • Scott Alexander says:

      That book has already been vaguely on my radar, but I’ll try to get to it more quickly now.

      • Eli says:

        Woot woot! It’s related to a bunch of the Bayesian neuroscience papers you covered a few months ago.

    • Richard Kennaway says:

      I noticed that chapter 7 of Andy Clark’s book references Powers in passing as “(Powers 1973, Powers et al., 2011)”. However, it omits these references from the bibliography, and the reader can only guess what they might be. Powers 1973 must be “Behavior: The Control of Perception”, but I am not sure of the identity of the other.

      EDIT: I have contacted the author and he will be correcting this for later editions.

  27. bara says:

    A few comments:
    – It’s obvious (and falsifiable) that the theory works for the lower levels, not so much for the higher ones. Not having read the book I don’t know if Powers treats his theory as a model (just abstract the observations and predict them) or a mechanism (this is exactly how it works). Pretty important to determine the criticism it deserves.
    – A very similar multi layer approach that (mostly) eschews formal control theory is presented by Rodney Brooks (recommended reading Flesh and Machine)
    - Scott’s skepticism about this theory mirrors a lot of my skepticism about dangerous-AI theories: it is clear what happens at the simpler levels, but once we start compounding, it is assumed that the same model will hold, which is unlikely (both here and in the unfriendly-AI field)

    Maybe some more later, but I should get back to work.

    • Richard Kennaway says:

      A few answers:

      I’m not sure what distinction you’re making between a mechanism and a model. I suspect that I do not mean the same thing by a model that you do. I would apply both words to Powers’ theory. The theory does not specify how, physically, the control systems are implemented, but it does say that they exist.

      Brooks’ subsumption architecture is a hierarchy of agents, but bears only that superficial resemblance to PCT. (Hierarchies are ten a penny. Almost everyone’s theory has a hierarchy.) In subsumption, higher-level agents operate instead of lower-level agents; in PCT, higher-level controllers operate by means of lower-level controllers (by setting their reference levels). In control theory, the latter pattern is called cascade control.
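
      For anyone who wants the cascade pattern in concrete form, here is a deliberately crude sketch (the gains, speeds, and the “world” are all invented for illustration): the outer controller never acts on the world directly, it only writes the reference value that the inner controller then pursues.

        # Hedged sketch of cascade control: the outer loop's only output is the
        # inner loop's reference. All numbers are arbitrary illustration values.
        def inner_loop(speed, speed_ref, gain=0.5):
            # Low-level controller: move actual speed toward the commanded speed.
            return speed + gain * (speed_ref - speed)

        def outer_loop(gap, gap_ref=30.0, gain=0.2, base_speed=20.0):
            # High-level controller: turn a gap error into a speed reference.
            return base_speed + gain * (gap - gap_ref)

        gap, speed, lead_speed = 60.0, 0.0, 20.0
        for _ in range(200):
            speed_ref = outer_loop(gap)        # outer loop commands the inner one
            speed = inner_loop(speed, speed_ref)
            gap += lead_speed - speed          # the world: the gap changes with speed

        print(round(gap, 1), round(speed, 1))  # approaches a 30-unit gap at speed 20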

      I have no particular ideas about dangerous AI, but the proper use of a fundamental theory is not to attempt to predict everything from scratch. To repeat an analogy I made in another comment, the idea of atoms doesn’t let you predict what chemical elements exist, and no-one would expect it to. To say, “This wonderful new idea explains everything! Now we just have to apply it!” is always wrong, and Powers himself never claimed anything of the sort. He always said that the upper parts of his proposed 9-level hierarchy (11-level in later writings) were speculative, and advocated working at the levels where you could actually demonstrate things working as claimed, such as simple tracking tasks. Unglamorous but necessary spade-work.

      “It works! Now we just have to scale it up.” is regularly said of every advance in AI, and if PCT-inspired architectures ever get traction in that field I expect it will be said of that as well. The hype is always to be ignored.

  28. RebusGlider says:

    Perceptual Control Theory, and the Lenin example in particular, seems related to Self-licensing (https://en.wikipedia.org/wiki/Self-licensing), the tendency of people to relax their standards of behaviour when they have increased confidence in their self-image.

  29. TomA says:

    When talking about characteristic traits of our species (e.g. higher order brain function versus lower order brain function), that which exists is that which works after millennia of evolutionary development. It still works regardless of whether or not we have successfully identified an accurate paradigm of function.

    Science frequently gains insight into difficult problems by studying deviation from the norm, and it usually starts with simple models that only add complexity as needed. And chaos always rears its head whenever the model gets very complex.

    Perhaps the only way we will ever get to an accurate understanding of higher brain function is by letting AIs evolve and noting what happens as they cross the threshold into full sentience.

  30. philkidd says:

    The neuroscience literature is absolutely lousy with examples of the brain acting as a control system for behavior. The example of tremors is discussed at length in Norbert Wiener’s original book on cybernetics, and I believe for the case of essential tremors, the view that the disorder is caused by malfunctions in proprioceptive feedback is well-accepted. Some other examples:

    1) Juvenile birds start singing basically random notes, but eventually learn to mimic the songs of their parents, essentially by comparing the two sequences of notes and generating an error feedback signal. The signaling pathways mediating this process have been mapped out in the brain of songbirds, and the presence of the error signal directly observed by electrophysiological measurements.

    2) Sensory adaptation works by targeting a set point for the activity of a sensory signaling pathway, and using feedback to return to this set point when external conditions are not changing too quickly. The pupillary light reflex is one example. Another is the dynamics of hair cell bundles in the cochlea, which maintain sensitivity by always returning to their basal level of activity even if a fixed force is exerted on them. It seems very likely that similar mechanisms are at work in the case of drug tolerance (mentioned above).

    3) Many types of precise motor actions rely on feedback control. Nicolas Minorsky actually designed an early version of the now-standard proportional-integral-derivative (PID) control strategy by observing ship helmsmen moving the wheel while keeping a ship on a fixed course.
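
    (For readers who haven’t met it, a hedged toy version of the PID idea – the gains and the little “heading” plant are invented for illustration, not taken from Minorsky: the correction combines the present error, its accumulated history, and its rate of change.)

      # Hedged sketch of a PID controller steering a toy heading toward a target.
      # Gains and the plant are invented purely for illustration.
      def make_pid(kp=1.2, ki=0.1, kd=0.8, dt=0.1):
          integral, prev_error = 0.0, 0.0
          def step(error):
              nonlocal integral, prev_error
              integral += error * dt                    # I: accumulated error
              derivative = (error - prev_error) / dt    # D: rate of change
              prev_error = error
              return kp * error + ki * integral + kd * derivative
          return step

      target, heading = 90.0, 0.0
      pid = make_pid()
      for _ in range(600):
          heading += pid(target - heading) * 0.1   # toy plant: output sets turn rate
      print(round(heading, 1))                     # settles close to 90 degrees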

    In spite of this I think there are good reasons to think that our brains really aren’t big hierarchical control systems. One is that global brain dynamics don’t look like the dynamics of a well-functioning control system, which would fluctuate in response to input, but always tend to stably return to some fixed rest state. Brain dynamics have all sorts of oscillations, and chaotic-looking, unpredictable phenomena. One nice example is from a recent imaging experiment observing whole-brain dynamics in nematodes at single-cell resolution (Kato S et al. Cell, Oct 2015), which show that the global brain dynamics are dominated by a stochastic cycle that switches unpredictably between various branches of a complicated attractor. So even a really simple brain really doesn’t look like it’s just a stable feedback control system.

  31. negative_utilitarian says:

    A lot of what you find interesting in this book isn’t really original to Powers: this sort of general notion of a ‘control structure’, and the awe-struck reverence for how generally useful the concept is for everything from biology and engineering to sociology and economics, was more or less what cybernetics was before its untimely demise. A significant number of the people involved in the early history of cybernetics were psychologists or sociologists: Gregory Bateson, Ross Ashby, Warren McCulloch… and they all were very interested in using these perspectives (notions of feedback and control) to model perception and behavior. Consider some of the iconic early works of cybernetics, What the Frog’s Eye Tells the Frog’s Brain, or the paper that originated the notion of a neural network, A Logical Calculus of Ideas Immanent in Nervous Activity.

    Before cybernetics was just a generic filler word indicating something techy/sciency, it was a really weird and interesting cross-disciplinary intellectual movement that produced an amazing decade or two of good work before collapsing in on itself (think early AI, but worse). Unfortunately, the post-collapse version of cybernetics was even more grandiose and far louder than what came before, which did a lot to obscure it from history, while the useful stuff was quietly absorbed into other fields.

    If you are interested in this sort of thing and want to engage with it a bit more, I recommend reading either Norbert Wiener, or Ross Ashby. Ross Ashby probably presents the clearest and easiest exposition as to what cybernetics was when it was still a good thing, so I would recommend reading stuff by him if you have your serious hat on and want to read the best and clearest arguments to support or refute. I would highly recommend reading Norbert Wiener if you want an eloquent and opinionated salespitch by the person who both founded the field and more or less killed it.

    I would also recommend reading Norbert Wiener if, for whatever reason, you happen to be interested in what happens when you do borderline unethical things to children to try to deliberately produce educational results, or happen to be interested in the effects and risks of automation, or happen to like grandiose visions of the world in which the forces of entropy and disorder are personified as some sort of demonic being.

    All of that being said, these are some books I would recommend. The first two are really short, and I would highly recommend either of them if you only choose one.
    =Norbert Wiener=
    * Cybernetics: Or Control and Communication in the Animal and the Machine
    * The Human Use of Human Beings
    * Ex-Prodigy: My Childhood and Youth
    =Ross Ashby=
    * An Introduction to Cybernetics
    * Design for a Brain

    • Eva Candle says:

      *-*-*-*-*-*-*-*-*
      “I would highly recommend reading Norbert Wiener if you want an eloquent and opinionated salespitch by the person who both founded the field [of cognitive control theory aka “cybernetics”] and more or less killed it. …. you happen to be interested in what happens when you do borderline unethical things to children to try to deliberately produce educational results … grandiose visions of the world in which the forces of entropy and disorder are personified as some sort of demonic being.”
      *-*-*-*-*-*-*-*-*

      This x 100.

      Also worth mentioning: Wiener’s personal experience of a constellation of disorders including Asperger’s, depression, bipolar disorder, BPD, and persistent suicidal ideation … coupled with a 5-sigma high IQ and a lifelong embrace of psychotherapy, combined with intense skepticism of both the foundations and the methods of psychotherapy.

      Also mixed-in: Norbert Wiener’s complex-yet-distant relationship with a still-higher-IQ brother (Fritz Wiener) who was institutionalized for three decades, and released only with the advent of effective antipsychotic medications.

      So yes, Wiener’s life presents numerous overlapping themes that are of broad interest to SSC readers.

      To add to Negative_Utilitarian’s excellent reading list:

      — The biography by Flo Conway and Jim Siegelman, “Dark Hero of the Information Age: In Search of Norbert Wiener” (2004). SIAM News reviews this biography in an article “The Inner Turbulence of Genius: Norbert Wiener”.

      — Norbert Wiener’s sole novel (!) “The Tempter” (1959) — the semi-autobiographical study of the collision between cybernetic values and economic values.

      The concluding sentences of “The Tempter” (written when Wiener was 64) are a harsh assessment:

      *-*-*-*-*-*-*-*-*
      I myself had been neither flesh, fowl, nor good red herring. I had betrayed my hero Woodbury [i.e., the spirit of inquiry] and my sometime companion Dominguez [i.e., the spirit of enterprise]. Above all, I had betrayed my own conscience and my own instincts of decency.

      My life from now on could be nothing but a secret penance. I was no longer young, well beyond the point when I could hope to atone for my misdeeds by a new turn toward righteousness. My powers were on the wane. The tally of my deeds was complete. There was nothing left for me but to make way for younger men [sic], in the hope that they would not follow in my footsteps.
      *-*-*-*-*-*-*-*-*

      Was Wiener assessing himself, or the entire scientific enterprise of his times?

      Either way, Wiener’s life and works hold broad interest for SSC fans. Thanks, Negative_Utilitarian.

      • reasoned argumentation says:

        Also worth mentioning: Wiener’s personal experience of a constellation of disorders including Aspergers, depression, bipolar disorder, BPD, and persistent suicidal ideation … coupled with a 5-sigma high IQ and a lifelong embrace of psychotherapy, combined with intense skepticism of both the foundations and the methods of psychotherapy.

        Also mixed in: Norbert Wiener’s complex-yet-distant relationship with a still-higher-IQ brother (Fritz Wiener) who was institutionalized for three decades, and released only with the advent of effective antipsychotic medications.

        So yes, Wiener’s life presents numerous overlapping themes that are of broad interest to SSC readers.

        Sounds more like it’s right up the alley of Greg Cochran – strong selective pressure for intelligence alone in an isolated reproductive population leading to all sorts of neurological dysfunctions as side effects.

        • Eva Candle says:

          At least one reader begs leave to disagree, progressively and emphatically, with the toxic ‘Galt wight’ framing of the above self-described “reasoned argumentation”.

          To borrow from Twain: “The heaven-born mission [of SSC] is to disseminate truth; to eradicate error; to educate, refine, and elevate the tone of public morals and manners, and make all citizens more gentle, more virtuous, more charitable, and in all ways better, and holier, and happier; and yet these blackhearted scoundrels [the so-called ‘Galt wightes’] degrade their great office persistently to the dissemination of falsehood, calumny, vituperation, and vulgarity.”

    • Richard Kennaway says:

      A problem with “cybernetics” is that a lot of what passed under that name showed little real understanding of what is and is not required for control to be successful. Ashby in particular thought that negative feedback control was necessarily inferior to control by means of sensing and predicting disturbances and computing the actions to take against them in advance. Observe that when you drive a car you are not aware of most of the disturbances tending to make your path deviate from the intended one — the wind, the uneven road, and so on. If you put an enormous effort into a control system of Ashby’s proposed form you might, I daresay, get some small improvement, but you would still have to deal with unmeasured disturbances (because there will always be some), and at any rate, none of that is relevant to an understanding of what a control system is.
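
      To make the contrast concrete, here is a minimal Python sketch of my own (not Ashby’s actual proposal; the toy plant, gains and disturbance sizes are all made up): a predict-and-cancel controller that only counteracts the disturbances it can measure, versus a plain negative-feedback controller that never senses the disturbances at all, only the error.

        import random

        def run(steps=500, mode="feedback", gain=0.5, target=10.0):
            x = target
            total_err = 0.0
            for _ in range(steps):
                d_known   = random.uniform(-1, 1)   # disturbance the controller can sense/predict
                d_unknown = random.uniform(-1, 1)   # disturbance it cannot
                if mode == "feedback":
                    action = gain * (target - x)    # act on the error only; never sees the disturbances
                else:                               # predict-and-cancel: counter the known disturbance in advance
                    action = -d_known
                x += action + d_known + d_unknown   # toy plant: the variable just integrates everything
                total_err += abs(target - x)
            return total_err / steps

        print("feedback           mean |error|:", round(run(mode="feedback"), 2))
        print("predict-and-cancel mean |error|:", round(run(mode="feedforward"), 2))

      The unmeasured disturbance makes the predict-and-cancel variable drift like a random walk, while the feedback controller keeps the error bounded without ever knowing what pushed it.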

  32. apm says:

    There are occasional claims that perceptual control theory can predict certain things about muscles and coordination better than other theories, sometimes with absurdly high accuracy of like r = 0.9 or something. Powers makes some of these claims in the book, but I can’t check them because I don’t have the original data he worked with and I don’t know how to calculate cybernetic control system outputs.

    Here you can check one of the claims and do the simple pursuit tracking experiment right in the browser.
    The claim is that human actions in the pursuit tracking task can be modeled very accurately by a control system: the actions of a human (when he is focused and actually trying to do the task well) and of a control system correlate at over r = 0.95, easily, with an RMSE of roughly 2–3%.
    This is just to say – yes, there is very likely a biological control system with these specific properties doing the task, and not some other kind of system.
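
    For the curious, this is roughly the kind of model that gets fit in those demos: an integrating controller acting on the perceived cursor–target error, with a gain and a transport delay as the free parameters. A rough Python sketch (the target function and parameter values here are my own illustrations, not Powers’ or anyone’s fitted values):

      import math

      def target(t):
          # slow pseudo-random target motion (sum of sines), purely illustrative
          return 50*math.sin(0.7*t) + 30*math.sin(1.3*t + 1.0)

      def simulate_model(duration=60.0, dt=0.01, gain=8.0, delay=0.15):
          steps = int(duration/dt)
          lag = int(delay/dt)
          errors = [0.0]*steps
          cursor = 0.0
          trace = []
          for i in range(steps):
              errors[i] = target(i*dt) - cursor                  # perceived error now...
              delayed_err = errors[i-lag] if i >= lag else 0.0   # ...acted on after a transport delay
              cursor += gain * delayed_err * dt                  # integrating output function
              trace.append(cursor)
          return trace

      model = simulate_model()
      # In the actual demos, the recorded handle positions of a person doing the same
      # task would be fit against this model (adjusting gain and delay), and the
      # correlation between the two traces is what is reportedly ~0.95 or higher.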

  33. Garrett says:

    How does this model address issues of learning and mastery?

    For example, when I started driving, I found it exhausting. Driving for about 30 minutes, managing my speed, vehicle lane centering, looking for dangers, etc., left me tired. I actually hated doing it. I also wasn’t very good. I could barely keep my speed constant and I was frequently cutting curbs.

    With years of practice I’m now able to drive 12h at a time and mostly be bored during the process.

    Does this involve pushing this kind of expertise down the stack so that it doesn’t require nearly as much conscious effort? Does it involve neuronal specialization so that these circuits have an appropriate level of stability?

    • Richard Kennaway says:

      See the chapter on reorganisation (of the control hierarchy). In brief, Powers’ hypothesis is that reorganisation is a random process of changing weights and connections, and happens faster when errors are increasing than when they are decreasing. “Error” here means differences between a control system’s perception and its reference. He made some computer simulations of this. One is of an arm operated by 14 muscles, with a perception from a pair of virtual eyes of where its virtual hand is in space, and a control system which learns to keep the hand on a moving target. (Only weights are changed in this model, not topology.)

      This hypothesis, together with the idea that (in people) reorganisation follows attention, also motivated what became the “Method of Levels” that I mentioned in another comment, which is currently being explored as a method of psychotherapy by several psychologists.

      The hypothesis arose from considering the question of how a hierarchy of controllers can change, since clearly it does, such as when we learn a motor control task. The changes cannot be made by yet another control system, since that leads to an infinite regress. How can errors (in the above technical sense) be reduced when the current control hierarchy is not doing a very good job? An answer is suggested by the example of chemotaxis in E. coli. It alternates between swimming in a straight line and randomly changing its direction. It changes direction more frequently when the conditions around it are getting worse and less often when they are getting better. The result is that it reliably navigates towards better conditions. This is a general mechanism that can serve as a final one, needing nothing beyond it; but it is not very efficient and does not scale to complex problems, hence the need for the control hierarchy.
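
      A toy sketch of that E. coli-style rule (my own illustration, not Powers’ arm simulation; the “error” function and step size are arbitrary): take random steps in parameter space, keep going while error falls, and tumble to a new random direction when it rises.

        import random

        def error(w):
            # stand-in for "total control error": squared distance of the weights
            # from some unknown good configuration
            good = [0.7, -0.3, 1.2]
            return sum((wi - gi)**2 for wi, gi in zip(w, good))

        w = [0.0, 0.0, 0.0]
        direction = [random.gauss(0, 1) for _ in w]
        prev_err = error(w)

        for step in range(2000):
            w = [wi + 0.01*di for wi, di in zip(w, direction)]   # keep drifting the same way
            e = error(w)
            if e > prev_err:                                     # things got worse: tumble
                direction = [random.gauss(0, 1) for _ in w]
            prev_err = e

        print("final error:", round(prev_err, 4))

      Nothing in the loop knows where the good configuration is; error still falls reliably, which is the point of the E. coli analogy.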

  34. Nyctef says:

    I’m not sure if it’s related, but the themes (everything is built out of hierarchies, strong reaction against behaviourism) remind me a lot of Arthur Koestler’s The Ghost in the Machine, which is also a fascinating book.

  35. jooyous says:

    My communism thermostat is set to 70, so if the communism in a room starts getting higher than that, I start actively sabotaging people’s projects.

    Although, come to think of it, maybe people do maintain levels of social conversation dominance and center-of-attentionness type things during social interactions. That’s probably the highest level I can think of.

  36. David Condon says:

    Powers isn’t really a good source for information about behaviorist theories since he never had any relevant background in the field. Deprivation/satiation with respect to reinforcers was a well-known experimental result in scientific research, and multiple theories have been put forth to explain its mechanism of which PCT is just one. With respect to lower-order processes as you put it, the words you are looking for are “procedural knowledge.”

    • Richard Kennaway says:

      “Satiation” has always seemed to me to be a fake explanation that merely gives a name to the phenomenon. But I have no relevant background in the field. What are some of the theories that you say explain its mechanism?

      • David Condon says:

        You’re right that it is just a description of the phenomenon, but in a sense, so is every other falsifiable claim. The ones that I know of would be Hull’s drive theory, Kantor’s setting factors, and Michael’s Establishing Operations; Michael’s theory being the one you’re most likely to encounter today.

        • Richard Kennaway says:

          Describing the observations is not a falsifiable claim, other than in the trivial sense that you might have written them down wrongly. Proposing an unobserved hypothetical mechanism that is responsible for these observations is a falsifiable claim: when you look for the proposed mechanism, it might not be there. What will a psychologist say when they encounter an example where “satiation” does not happen? I guess the conclusion will be simply that that situation does not manifest satiation. Nothing will have been refuted.

          I have encountered none of the examples you list, and after googling I see that this is because I am not a behavioral psychologist, or any other sort. On a superficial glance they do all look like little more than giving names to things. Other people here were saying that Behaviorism is dead, but apparently not.

          • David Condon says:

            Proposing an unobserved hypothetical mechanism is a description of expected future observations under certain conditions. Proposing an observed hypothetical mechanism is an argument that past observations will predict future observations. Whether or not the data has yet to be observed has nothing to do with whether or not the hypothesis is falsifiable. If satiation does not occur under conditions in which it previously occurred, this refutes past claims that satiation will occur under those conditions. Future experiments aren’t expected to refute anything because it has already been determined that satiation exists. If there weren’t thousands of studies demonstrating satiation, then whether or not satiation exists would still be an open question. There was a time when it was still an open question whether satiation actually exists; there was even a time when it was still an open question whether changing stimuli immediately after a response would predict future occurrences of that response. It’s been around so long that we don’t even think of it as a theory anymore, but yes, there was a time when it was considered plausible that the theory might be falsified.

  37. Hello Scott,

    I currently do research in Robotics & AI and come from Electrical Engineering, which is a parent field of Control Theory.

    The first three sections of the post kept me very confused. I was thinking: “Why would Scott decide to write about this?” It’s not that a control theory of the brain does not fit reality; it’s that it is so obvious. However, at the same time it is not useful at all, because the resulting model has so many moving parts that it will not let you predict any new human behaviour anyway.

    Let me explain. I think pretty much every situation where an agent is trying to achieve some goal can be viewed as if it was a control system. In reinforcement learning you could view maximising reward as minimising the error. The actor constantly tries out actions that are expected to bring it to the goal. An optimisation process, like neural network training, tries out values of parameters that are expected to bring prediction error down.

    I mean: if you want an agent to get to a target state, it must be doing something like control.

    Control theory is useful because if you have the parameters of a system, e.g. a robotic hand, then you can figure out what exact computations and motor actions you should perform to achieve some target state.

    Saying that the brain acts as if it were a hierarchical control system is only useful if you can concretely pinpoint the parameters of such a system. But the control-theory brain model is so complex: the feedback loops go up and down the hierarchy, the dependencies change over time, and the system is implemented via neurons, hormones, and so many other bodily mechanisms. There is no hope of ever learning the model in enough detail to make use of the mathematics of control theory. In other words, one is deluded into thinking the problem was understood because it was given a name.

    But then I read the 4th section. And I totally see the point of proposing it at the time of behaviourism. When people are so confused about how brains work that they are ready to postulate even magic, then yes, this theory is helpful. However, this lens will not add anything new for anyone who has had to think about how to design agents that operate in the real world: thermostats, robots, AIs.

    To be fair: you can get some predictions out of the analogy even when you are missing parameters in your model. For example, AI safety researchers have no idea what AGI’s brain will look like but they reason abstractly using a sort of control theory:

    The AGI will have some goal. The AGI is very good at achieving its goals. Thus, you can assume that the world where AGI lives will be such that it satisfies AGI’s goals. And then you dream up what kinds of worlds have most paperclips.

    If what you liked in the proposed model of mind is its hierarchy, then you should look at Deep Neural Networks. I think they are a far better analogy to human thinking than abstract control systems (neural nets are used to solve control theory problems and can be viewed as control systems). Obviously, they are inspired by human brains. There is lots of feedback between neuroscience and AI.

    Strong recommendation:
    If you want a super fast, high-level, crash course (1h) on deep neural nets I recommend this lecture from Yann LeCun, head of Facebook AI Research.

    • Richard Kennaway says:

      Define things vaguely enough and they will seem to apply to anything and make no predictions.

      “Control” in the sense of control theory means more than just getting to a target state. That is a result; control is a particular mechanism. Learning is not control. Maximisation is not control. Go that way and you end up saying that a falling rock is a control system. You end up saying that everything is a control system, and control system engineers will wonder what you’re smoking.

      No, not everything is a control system.

      “Control theory is useful because if you have parameters of a system, e.g. a robotic hand, then you can figure out what exact computations and motor actions you should complete to achieve some target state.”

      That is not how control systems work. You can make control systems that work like that, but it is not necessary. What is necessary is that the control system be able to do something that gets closer to the target state. What could be less precise in its operation than the room thermostat? It makes no prediction, learns nothing, senses nothing but the current state of the controlled variable and the reference, and the only computation it makes is to compare the two. Yet it works: the temperature at the sensor remains close to the reference temperature, regardless of disturbances. None of those complications are a necessary part of a control system.
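
      For concreteness, a minimal sketch of that thermostat loop (the numbers are arbitrary): it senses only the current temperature and the reference, compares the two, and switches the heater, knowing nothing about the disturbances it is resisting.

        import random

        reference = 70.0
        temp = 60.0
        heater_on = False

        for minute in range(120):
            disturbance = random.uniform(-0.6, 0.2)   # heat leaking out, doors opening, etc.
            if temp < reference - 1:                  # the only "computation": compare perception to reference
                heater_on = True
            elif temp > reference + 1:
                heater_on = False
            temp += (0.5 if heater_on else 0.0) + disturbance

        print("temperature after two hours:", round(temp, 1))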

      • > “Control” in the sense of control theory means more than just being getting to a target state. That is a result; control is a particular mechanism.

        > No, not everything is a control system.

        I mostly agree. Though I could contend that a falling rock (together with the physics governing it) is indeed a control system, I will use a more defensible example. You could easily say that a ball placed on a hillside behaves as if it was a control system: one that is designed for placing the ball at the lowest points in a given area.

        A rolling ball is in principle similar to gradient descent, the process via which a lot of machine learning is done today. The process of parameter optimisation, i.e. learning, behaves as if it were a control system: at a given point there is an error in the prediction, and you use the error information to improve the model (learn). Maximisation is just gradient ascent rather than descent. You could probably find analogies between control theory and evolution and lots of other natural processes.
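
        To make the analogy concrete, a tiny gradient-descent loop (the loss surface and learning rate are arbitrary illustrations): at each step the current error is used to nudge the parameter a little way downhill.

          w = 5.0                       # parameter ("ball position")
          for step in range(50):
              error = (w - 2.0) ** 2    # loss surface with its minimum at w = 2
              grad = 2 * (w - 2.0)      # slope of the surface at w
              w -= 0.1 * grad           # roll a little way downhill
          print(w)                      # ends up close to 2.0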

        I am not saying we should start calling all of these control systems. I mean the reverse: saying that something behaves as if it was a control system without specifying key parameters of the system does not help much.

        A control system is a very general thing that can behave in many different ways. Thus, when you learn that something is a control system, that may not dramatically decrease your hypothesis space of possible behaviours of that thing.

        For example, imagine that you lived among wolves for a year and you roughly know how they behave: get food, mate, etc. Now someone comes over and gives you an additional piece of info. Wolves are multi-objective hierarchical optimisers. Which low level objectives they choose to optimise for at a given time depends on the result of high level optimisations. How much did you learn? Which hypotheses were eliminated upon learning of this fact?

        Saying that people are multi-level control systems with feedback loops running up and down the hierarchy leaves so many free parameters that you will be able to explain any observation.

        I should say that it would be useful if we had a method to estimate the parameters of this big model, e.g. which things we are controlling for and how the different controllers are connected. But it seems to me that Powers mostly dreamed up the particular levels, and ultimately the theory doesn’t have more predictive power than competing theories.

        I don’t mean to be too negative. This is a big improvement on magical kinds of psychology. It was a step towards reductionism. Viewing problems from a different perspective can open up new doors. But I don’t think you get a lot out of this analogy today.

        I’m not even totally sure this is a significant improvement on behaviourism. I don’t know too much about it, but I’m confident someone at some point realised that we don’t crave sugar all the time and that our desires are conditional and time-variant.

        > That is not how control systems work.

        This is a fair point. I meant that you need a model to predict how the system will behave (to simulate it, as we would like with humans). You indeed don’t need the model to build a system that can reach a target state.

        • Richard Kennaway says:

          “You could easily say that a ball placed on a hillside behaves as if it was a control system — one that is designed for placing the ball at lowest points in a given area.”

          Easy, but wrong. You cannot look at the result of a process, Texas sharpshooter fashion, and say the process “behaves as if” it was controlling for that result. Control is an inherently causal, counterfactual concept. To demonstrate that the result was or was not produced by a control process one must consider what would have happened under other circumstances. Would the process have varied itself in such a way as to still produce the result, in spite of forces pushing it away from that result? Iron filings near a magnet will have their attraction to it forestalled by a mere sheet of paper, but if Romeo finds the door to Juliet locked, he will go in by the window.

          The falling rock clearly fails this test. It will not resist the wind pushing it to land in a different spot, even if that spot is higher. It will not try to steer itself towards a rabbit hole that will let it fall lower.

          This is the test for determining the presence of a control process, when you do not know if one is present. Push on the variable you suspect may be under control. If it does not change, or changes far less than what you would expect given what you can see of the system, that is a sign that a control system may be present. If it passively yields to all manipulation, it is not under control. A ball in a bowl yields: its position is not under control. A ball nailed down might appear to be under control, until you discover the nail. Then it’s a ball in a bowl with a spring constant ten million times larger. But however strongly the wind pushes on a car, if it is not so strong as to overturn it, an experienced driver will keep it on the road. That is a control process. The designers of self-driving cars know this very well; psychologists, not so much.
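
          A rough simulation of that test (my own sketch; the spring constant, loop gain and push size are arbitrary): push on a passive ball-in-a-bowl and it yields in proportion to the push; push on a variable defended by a negative-feedback controller and it barely moves.

            def ball_in_bowl(push, k=1.0):
                # passive equilibrium: displacement where the spring force balances the push
                return push / k

            def controlled_variable(push, gain=50.0, steps=1000, dt=0.01):
                x, reference = 0.0, 0.0
                for _ in range(steps):
                    action = gain * (reference - x)   # controller opposes any deviation it perceives
                    x += (action + push) * dt
                return x

            print("passive displacement:   ", ball_in_bowl(push=5.0))
            print("controlled displacement:", round(controlled_variable(push=5.0), 3))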

          “Rolling ball is in principle similar to gradient descent, the process via which lots of machine learning is done today.”

          It is indeed similar. Gradient descent is not a control process. I do not know if anything resembling control finds employment in machine learning, but machine learning is not inherently a control process.

          “Saying that people are multi-level control systems with feedback loops running up and down the hierarchy leaves so many free parameters that you will be able to explain any observation.”

          The general idea of a control system does not provide you with the answers to any specific questions about what control systems are present in a given situation and what they are controlling, any more than the idea of atoms tells you what chemical elements exist. For that, you must go out and look (using, among other things, the test I described above).

          Compare your parable of the wolves with atomic theory as it existed in the 19th century. A sceptic about atoms says, go and observe one of our porcelain factories, how all the glazes are made and applied, and the pottery fired. You say this is all atoms combining and recombining, but which hypotheses were eliminated?

          “Viewing problems from different perspective can open up new doors. But I don’t think you get a lot out of this analogy today.”

          The application of control theory to living systems is not an analogy, any more than atoms are an analogy.

          The thing is, to people such as yourself with experience in control theory, that people control seems trivially obvious. To most psychologists and neuroscientists, though, it is not, and when present, poorly understood. Anytime you see the phrase “control of behaviour” (compare the title of the book under review) consider it a red flag. It is perceptions that are controlled, not behaviour. Behaviour varies in whatever way is necessary to produce the intended perception.

        • Controls Freak says:

          I could contest that falling rock (together with physics governing it) is indeed a control system. … You could easily say that a ball placed on a hillside behaves as if it was a control system — one that is designed for placing the ball at lowest points in a given area.

          What Richard said. Also, is there any reason to try to call this a control system rather than simply say that it’s a stable dynamical system? Otherwise, I think we’re committing the same error as when people try to talk about evolution in terms of, “Such and such was designed by evolution to do such and such.” You seem to be injecting an, “..it behaves as if it were designed for..” for no good reason.

  38. TheEternallyPerplexed says:

    Late to comment, still…
    New commenter here.
    > I think maybe there are some obvious parallels, maybe even parallels that bear fruit in empirical results, in lower level systems like motor control. Once you get to high-level systems like communism or social desirability, I’m not sure we’re doing much better than the police-as-control-system metaphor.

    I think the usefulness of a paradigm for one tier fades out fast as you go up (or down) the tiers, going, as Null Hypothesis said, “from ‘so painfully obvious as to be beyond mentioning’ to, as you say, ‘abstract, unprovable, and unfalsifiable’ in an instant, once you get to the high-level stuff.” I’d call these emergent properties that can only be usefully modelled so far in a reductionist way.

    There is an interesting theory by Ezequiel Morsella with preliminary experiments, claiming that consciousness (I use it here as a stand-in for Powers’ high-level systems, hope that is not too misleading) is based on the clash of conflicting motor commands (with competing agendas), when they involve voluntary motion.
    Paper here: https://www.ncbi.nlm.nih.gov/pubmed/16262477
    Short overview by Peter Watts here: http://www.rifters.com/crawl/?p=791, from paragraph 3
    Experiments: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2440575/

    This could be a missing link from Powerian control systems (which would best fit to explain motor commands) to the stuff higher up. Morsella’s ‘parallel responses into skeletal muscle’ (‘PRISM’) provide the space for goal priority negotiation, together with awareness (as a perhaps unnecessary side-effect?) – I guess that would be the next higher-tier system(s). From there on upward, new models are needed that allow things like self-awareness, guilt, communism, motivation, and a chocolate/BMI trade-off.

    Would a (sufficiently complex) AI ‘wake up’ if its programming makes subroutines compete for effector access?

  39. tardx says:

    This thread reminds me of the (apocryphal?) comment by an early astronaut (Alan Shepard?) who, responding to the question of why put a man in space, said something like: “Man is the only adaptive control system with pattern-recognition capability that can be mass-produced by unskilled labor.”

  40. Archon says:

    This is the most computer-scientist theory of psychology I have ever heard.
    Makes sense, considering he was a cyberneticist first, though.

    Seems like a pretty good theory, though, at least at the low level.

  41. Richard Kennaway says:

    “So I guess it’s important to see this as a product of its times. And I don’t understand those times – why Behaviorism ever seemed attractive is a mystery to me”

    Mystery or not, behaviorism is still practised. As a psychologist, you will have a better idea than me of where it fits into the landscape of contemporary psychology and whether anyone else pays attention to it, but they have their societies and journals (Society for/Journal of the Experimental Analysis of Behavior, Journal of Applied Behavior Analysis, Association for Behavior Analysis International, etc.) “Behavior analysis” means behaviorism, btw.

    In the last week a behaviourist has popped up in the comments on Andrew Gelman’s blog, and his (the commenter’s) style of writing is something I’ve seen before in other behaviourists, a certain condescension and self-aggrandisement, as if they have The Answers and are looking down from a lofty height on the stimulus-response automata around them. I have to wonder if some people’s inner experience (or lack of it) makes behaviourism seem as obvious to them as it seems bizarre to you, and to me. It is only accidentally a product of its time, when Skinner and Watson formulated the concept and made it public, but since then it has been sustained by those that it strikes a chord with.

    Here’s an example of behaviorism from 1989:

    “This study reports the results of an experiment with 4 female 5-year-old children, in which the verbal behavior of the children (talking to themselves) was studied under two conditions: an anthropomorphic toy condition and a nonanthropomorphic toy condition. The anthropomorphic condition consisted of three-dimensional toys such as dolls, stuffed animals, and figurines. The nonanthropomorphic toy condition consisted of two-dimensional materials such as puzzles, coloring books, and story books. The independent variables were the toy conditions. The dependent variables were verbal-behavior units; these included mands, tacts, intraverbals, autoclitics, and conversational units. The conditions were compared using a multiple schedule design. The results showed that more total units occurred in the anthropomorphic toy condition than in the nonanthropomorphic toy condition and that conversational units occurred in the anthropomorphic condition only.” (J.Exp.An.Beh. 1989, 51, 353-359)

    As someone wryly commented, “An excellent example which is very typical of behaviorist work on language. Children talk to their dolls but not their blocks.”

  42. warrenmansell says:

    Great to see this level and depth of discussion of Powers (1973). Since this great book, many different research teams and applied practitioners have been using the theory across almost every domain of the life and social sciences, with fascinating results. See pctweb.org.

  43. mindreadings says:

    This is an excellent review, Scott, especially considering your admission that you only partly understood the book. I’d say that the amount you understood was far greater than the amount I understood upon first reading, which, for me, was back in 1976 or so. It took me several years of reading whatever else I could find by Powers, doing a lot of hands-on research with the newly available microcomputer (my first was a Commodore 64 purchased back in 1977) and finally meeting with Powers himself (beginning in 1979) until I felt like I understood Powers’ theory, which is now called Perceptual Control Theory (PCT). When I get a chance I will try to respond in more detail to your review and hopefully clarify some of the points that you found difficult or confusing.

    Best regards

    Rick Marken