I.
Seeing Like A State is the book G.K. Chesterton would have written if he had gone into economic history instead of literature. Since he didn’t, James Scott had to write it a century later. The wait was worth it.
Scott starts with the story of “scientific forestry” in 18th century Prussia. Enlightenment rationalists noticed that peasants were just cutting down whatever trees happened to grow in the forests, like a chump. They came up with a better idea: clear all the forests and replace them by planting identical copies of Norway spruce (the highest-lumber-yield-per-unit-time tree) in an evenly-spaced rectangular grid. Then you could just walk in with an axe one day and chop down like a zillion trees an hour and have more timber than you could possibly ever want.
This went poorly. The impoverished ecosystem couldn’t support the game animals and medicinal herbs that sustained the surrounding peasant villages, and they suffered an economic collapse. The endless rows of identical trees were a perfect breeding ground for plant diseases and forest fires. And the complex ecological processes that sustained the soil stopped working, so after a generation the Norway spruces grew stunted and malnourished. Yet for some reason, everyone involved got promoted, and “scientific forestry” spread across Europe and the world.
And this pattern repeats with suspicious regularity across history, not just in biological systems but also in social ones.
Natural organically-evolved cities tend to be densely-packed mixtures of dark alleys, tiny shops, and overcrowded streets. Modern scientific rationalists came up with a better idea: an evenly-spaced rectangular grid of identical giant Brutalist apartment buildings separated by wide boulevards, with everything separated into carefully-zoned districts. Yet for some reason, whenever these new rational cities were built, people hated them and did everything they could to move out into more organic suburbs. And again, for some reason the urban planners got promoted, became famous, and spread their destructive techniques around the world.
Ye olde organically-evolved peasant villages tended to be complicated confusions of everybody trying to raise fifty different crops at the same time on awkwardly shaped cramped parcels of land. Modern scientific rationalists came up with a better idea: giant collective mechanized farms growing purpose-bred high-yield crops and arranged in (say it with me) evenly-spaced rectangular grids. Yet for some reason, these giant collective farms had lower yields per acre than the old traditional methods, and wherever they arose famine and mass starvation followed. And again, for some reason governments continued to push the more “modern” methods, whether it was socialist collectives in the USSR, big agricultural corporations in the US, or sprawling banana plantations in the Third World.
Traditional lifestyles of many East African natives were nomadic, involving slash-and-burn agriculture in complicated jungle terrain according to a bewildering variety of ad-hoc rules. Modern scientific rationalists in African governments (both colonial and independent) came up with a better idea – resettlement of the natives into villages, where they could have modern amenities like schools, wells, electricity, and evenly-spaced rectangular grids. Yet for some reason, these villages kept failing: their crops died, their economies collapsed, and their native inhabitants disappeared back into the jungle. And again, for some reason the African governments kept trying to bring the natives back and make them stay, even if they had to blur the lines between villages and concentration camps to make it work.
A favorite Seeing Like A State image: a comparison of street maps for Bruges (a premodern organic city) with Chicago (a modern planned city)
Why did all of these schemes fail? And more importantly, why were they celebrated, rewarded, and continued, even when the fact of their failure became too obvious to ignore? Scott gives a two-part answer.
The first part of the story is High Modernism, an aesthetic taste masquerading as a scientific philosophy. The High Modernists claimed to be about figuring out the most efficient and high-tech way of doing things, but most of them knew little relevant math or science and were basically just LARPing being rational by placing things in evenly-spaced rectangular grids.
But the High Modernists were pawns in service of a deeper motive: the centralized state wanted the world to be “legible”, ie arranged in a way that made it easy to monitor and control. An intact forest might be more productive than an evenly-spaced rectangular grid of Norway spruce, but it was harder to legislate rules for, or assess taxes on.
The state promoted the High Modernists’ platitudes about The Greater Good as cover, in order to implement the totalitarian schemes they wanted to implement anyway. The resulting experiments were usually failures by the humanitarian goals of the Modernists, but resounding successes by the command-and-control goals of the state. And so we gradually transitioned from systems that were messy but full of fine-tuned hidden order, to ones that were barely-functional but really easy to tax.
II.
Suppose you’re a premodern king, maybe one of the Louises who ruled France in the Middle Ages. You want to tax people to raise money for a Crusade or something. Practically everyone in your kingdom is a peasant, and all the peasants produce is grain, so you’ll tax them in grain. Shouldn’t be too hard, right? You’ll just measure how many pints of grain everyone produces, and…
The pint in eighteenth-century Paris was equivalent to 0.93 liters, whereas in Seine-en-Montagne it was 1.99 liters and in Precy-sous-Thil, an astounding 3.33 liters. The aune, a measure of length used for cloth, varied depending on the material (the unit for silk, for instance, was smaller than that for linen) and across France there were at least seventeen different aunes.
Okay, this is stupid. Just give everybody evenly-sized baskets, and tell them that baskets are the new unit of measurement.
Virtually everywhere in early modern Europe were endless micropolitics about how baskets might be adjusted through wear, bulging, tricks of weaving, moisture, the thickness of the rim, and so on. In some areas the local standards for the bushel and other units of measurement were kept in metallic form and placed in the care of a trusted official or else literally carved into the stone of a church or the town hall. Nor did it end there. How the grain was to be poured (from shoulder height, which packed it somewhat, or from waist height?), how damp it could be, whether the container could be shaken down, and finally, if and how it was to be leveled off when full were subjects of long and bitter controversy.
Huh, this medieval king business is harder than you thought. Maybe you can just leave this problem to the feudal lords?
Thus far, this account of local measurement practices risks giving the impression that, although local conceptions of distance, area, volume, and so on were different from and more varied than the unitary abstract standards a state might favor, they were nevertheless aiming at objective accuracy. This impression would be false. […]
A good part of the politics of measurement sprang from what a contemporary economist might call the “stickiness” of feudal rents. Noble and clerical claimants often found it difficult to increase feudal dues directly; the levels set for various charges were the result of long struggle, and even a small increase above the customary level was viewed as a threatening breach of tradition. Adjusting the measure, however, represented a roundabout way of achieving the same end.
The local lord might, for example, lend grain to peasants in smaller baskets and insist on repayment in larger baskets. He might surreptitiously or even boldly enlarge the size of the grain sacks accepted for milling (a monopoly of the domain lord) and reduce the size of the sacks used for measuring out flour; he might also collect feudal dues in larger baskets and pay wages in kind in smaller baskets. While the formal custom governing feudal dues and wages would thus remain intact (requiring, for example, the same number of sacks of wheat from the harvest of a given holding), the actual transaction might increasingly favor the lord. The results of such fiddling were far from trivial. Kula estimates that the size of the bushel (boisseau) used to collect the main feudal rent (taille) increased by one-third between 1674 and 1716 as part of what was called the reaction feodale.
Okay, but nobody’s going to make too big a deal about this, right?
This sense of victimization [over changing units of measure] was evident in the cahiers of grievances prepared for the meeting of the Estates General just before the Revolution. […] In an unprecedented revolutionary context where an entirely new political system was being created from first principles, it was surely no great matter to legislate uniform weights and measures. As the revolutionary decree read: “The centuries-old dream of the masses of only one just measure has come true! The Revolution has given the people the meter!”
Okay, so apparently (you think to yourself as you are being led to the guillotine), it was a big deal after all.
Maybe you shouldn’t have taxed grain. Maybe you should tax land. After all, it’s the land that grows the grain. Just figure out how much land everybody owns, and you can calculate some kind of appropriate tax from there.
So, uh, peasant villagers, how much land does each of you own?
A hypothetical case of customary land tenure practices may help demonstrate how difficult it is to assimilate such practices to the barebones scheme of a modern cadastral map [land survey suitable for tax assessment] […]
Let us imagine a community in which families have usufruct rights to parcels of cropland during the main growing season. Only certain crops, however, may be planted, and every seven years the usufruct land is distributed among resident families according to each family’s size and its number of able-bodied adults. After the harvest of the main-season crop, all cropland reverts to common land where any family may glean, graze their fowl and livestock, and even plant quickly maturing, dry-season crops. Rights to graze fowl and livestock on pasture-land held in common by the village is extended to all local families, but the number of animals that can be grazed is restricted according to family size, especially in dry years when forage is scarce. Families not using their grazing rights can give them to other villagers but not to outsiders. Everyone has the right to gather firewood for normal family needs, and the village blacksmith and baker are given larger allotments. No commercial sale from village woodlands is permitted.
Trees that have been planted and any fruit they may bear are the property of the family who planted them, no matter where they are now growing. Fruit fallen from such trees, however, is the property of anyone who gathers it. When a family fells one of its trees or a tree is felled by a storm, the trunk belongs to the family, the branches to the immediate neighbors, and the “tops” (leaves and twigs) to any poorer villager who carries them off. Land is set aside for use or leasing out by widows with children and dependents of conscripted males. Usufruct rights to land and trees may be let to anyone in the village; the only time they may be let to someone outside the village is if no one in the community wishes to claim them. After a crop failure leading to a food shortage, many of these arrangements are readjusted.
You know what? I’m just going to put you all down as owning ten. Ten land. Everyone okay with that? Cool. Let’s say ten land for everyone and just move on to the next village.
Novoselok village had a varied economy of cultivation, grazing, and forestry…the complex welter of strips was designed to ensure that each village household received a strip of land in every ecological zone. An individual household might have as many as ten to fifteen different plots constituting something of a representative sample of the village’s ecological zones and microclimates. The distribution spread a family’s risks prudently, and from time to time the land was reshuffled as families grew or shrunk…The strips of land were generally straight and parallel so that a readjustment could be made by moving small stakes along just one side of a field, without having to think of areal dimensions. Where the other side of the field was not parallel, the stakes could be shifted to compensate for the fact that the strip lay toward the narrower or wider end of the field. Irregular fields were divided, not according to area, but according to yield.
…huh. Maybe this isn’t going to work. Let’s try it the other way around. Instead of mapping land, we can just get a list with the name of everyone in the village, and go from there.
Only wealthy aristocrats tended to have fixed surnames…Imagine the dilemma of a tithe or capitation-tax collector [in England] faced with a male population, 90% of whom bore just six Christian names (John, William, Thomas, Robert, Richard, and Henry).
Okay, fine. That won’t work either. Surely there’s something else we can do to assess a tax burden on each estate. Think outside the box, scrape the bottom of the barrel!
The door-and-window tax established in France [in the 18th century] is a striking case in point. Its originator must have reasoned that the number of windows and doors in a dwelling was proportional to the dwelling’s size. Thus a tax assessor need not enter the house or measure it, but merely count the doors and windows.
As a simple, workable formula, it was a brilliant stroke, but it was not without consequences. Peasant dwellings were subsequently designed or renovated with the formula in mind so as to have as few openings as possible. While the fiscal losses could be recouped by raising the tax per opening, the long-term effects on the health of the population lasted for more than a century.
Close enough.
III.
The moral of the story is: premodern states had very limited ability to tax their citizens effectively. Along with the problems mentioned above – nonstandardized measurement, nonstandardized property rights, nonstandardized personal names – we can add a few others. At this point national languages were a cruel fiction; local “dialects” could be as different from one another as eg Spanish is from Portuguese, so villagers might not even be able to understand the tax collectors. Worst of all, there was no such thing as a census in France until the 17th century, so there wasn’t even a good idea of how many people or villages there were.
Kings usually solved this problem by leaving the tax collection up to local lords, who presumably knew the idiosyncrasies of their own domains. But one step wasn’t always enough. If the King only knew Dukes, and the Dukes only knew Barons, and the Barons only knew village headmen, and it was only the village headmen who actually knew anything about the peasants, then you needed a four-step chain to get any taxes. Each link in the chain had an incentive to collect as much as they could and give up as little as they could get away with. So on the one end, the peasants were paying backbreaking punitive taxes. And on the other, the Royal Treasurer was handing the King half a loaf of moldy bread and saying “Here you go, Sire, apparently this is all the grain in France.”
So from the beginning, kings had an incentive to make the country “legible” – that is, so organized and well-indexed that it was easy to know everything about everyone and collect/double-check taxes. Also from the beginning, nobles had an incentive to frustrate the kings so that they wouldn’t be out of a job. And commoners, who figured that anything which made it easier for the State to tax them and interfere in their affairs was bad news, usually resisted too.
Scott doesn’t bring this up, but it’s interesting reading this in the context of Biblical history. It would seem that whoever wrote the Bible was not a big fan of censuses. From 1 Chronicles 21:
Satan rose up against Israel and caused David to take a census of the people of Israel. So David said to Joab and the commanders of the army, “Take a census of all the people of Israel—from Beersheba in the south to Dan in the north—and bring me a report so I may know how many there are.”
But Joab replied, “May the Lord increase the number of his people a hundred times over! But why, my lord the king, do you want to do this? Are they not all your servants? Why must you cause Israel to sin?”
But the king insisted that they take the census, so Joab traveled throughout all Israel to count the people. Then he returned to Jerusalem and reported the number of people to David. There were 1,100,000 warriors in all Israel who could handle a sword, and 470,000 in Judah. But Joab did not include the tribes of Levi and Benjamin in the census because he was so distressed at what the king had made him do.
God was very displeased with the census, and he punished Israel for it. Then David said to God, “I have sinned greatly by taking this census. Please forgive my guilt for doing this foolish thing.” Then the Lord spoke to Gad, David’s seer. This was the message: “Go and say to David, ‘This is what the Lord says: I will give you three choices. Choose one of these punishments, and I will inflict it on you.’”
So Gad came to David and said, “These are the choices the Lord has given you. You may choose three years of famine, three months of destruction by the sword of your enemies, or three days of severe plague as the angel of the Lord brings devastation throughout the land of Israel. Decide what answer I should give the Lord who sent me.”
“I’m in a desperate situation!” David replied to Gad. “But let me fall into the hands of the Lord, for his mercy is very great. Do not let me fall into human hands.”
So the Lord sent a plague upon Israel, and 70,000 people died as a result.
(related: Scott examined some of the same data about Holocaust survival rates as Eichmann In Jerusalem, but made them make a lot more sense: the greater the legibility of the state, the worse for the Jews. One reason Jewish survival in the Netherlands was so low was because the Netherlands had a very accurate census of how many Jews there were and where they lived; sometimes officials saved Jews by literally burning census records).
Centralized government projects promoting legibility have always been a two-steps-forward, one-step-back sort of thing. The government very gradually expands its reach near the capital, where its power is strongest, to peasants who it knows will try to thwart it as soon as its back is turned, and then, if its decrees survive, it pushes outward toward the hinterlands.
Scott describes the spread of surnames. Peasants didn’t like permanent surnames. Their own system was quite reasonable for them: John the baker was John Baker, John the blacksmith was John Smith, John who lived under the hill was John Underhill, John who was really short was John Short. The same person might be John Smith and John Underhill in different contexts, where his status as a blacksmith or place of origin was more important.
But the government insisted on giving everyone a single permanent name, unique for the village, and tracking who was in the same family as whom. Resistance was intense:
What evidence we have suggests that second names of any kind became rare as distance from the state’s fiscal reach increased. Whereas one-third of the households in Florence declared a second name, the proportion dropped to one-fifth for secondary towns and to one-tenth in the countryside. It was not until the seventeenth century that family names crystallized in the most remote and poorest areas of Tuscany – the areas that would have had the least contact with officialdom. […]
State naming practices, like state mapping practices, were inevitably associated with taxes (labor, military service, grain, revenue) and hence aroused popular resistance. The great English peasant rising of 1381 (often called the Wat Tyler Rebellion) is attributed to an unprecedented decade of registration and assessments of poll taxes. For English as well as for Tuscan peasants, a census of all adult males could not but appear ominous, if not ruinous.
The same issues repeated themselves a few hundred years later when Europe started colonizing other continents. Again they encountered a population with naming systems they found unclear and unsuitable to taxation. But since colonial states had more control over their subjects than the relatively weak feudal monarchies of the Middle Ages, they were able to deal with it in one fell swoop, sometimes comically so:
Nowhere is this better illustrated than in the Philippines under the Spanish. Filipinos were instructed by the decree of November 21, 1849 to take on permanent Hispanic surnames. […]
Each local official was to be given a supply of surnames sufficient for his jurisdiction, “taking care that the distribution be made by letters of the alphabet.” In practice, each town was given a number of pages from the alphabetized [catalog], producing whole towns with surnames beginning with the same letter. In situations where there has been little in-migration in the past 150 years, the traces of this administrative exercise are still perfectly visible across the landscape. “For example, in the Bikol region, the entire alphabet is laid out like a garland over the provinces of Albay, Sorsogon, and Catanduanes which in 1849 belonged to the single jurisdiction of Albay. Beginning with A at the provincial capital, the letters B and C mark the towns along the coast beyond Tabaco to Tiwi. We return and trace along the coast of Sorsogon the letters E to L, then starting down the Iraya Valley at Daraga with M, we stop with S to Polangui and Libon, and finish the alphabet with a quick tour around the island of Catanduanes.”
The confusion for which the decree is the antidote is largely that of the administrator and the tax collector. Universal last names, they believe, will facilitate the administration of justice, finance, and public order as well as make it simpler for prospective marriage partners to calculate their degree of consanguinity. For a utilitarian state builder of [Governor] Claveria’s temper, however, the ultimate goal was a complete and legible list of subjects and taxpayers.
This was actually a lot less cute and funny than the alphabetization makes it sound:
What if the Filipinos chose to ignore their new last names? This possibility had already crossed Claveria’s mind, and he took steps to make sure that the names would stick. Schoolteachers were ordered to forbid their students to address or even know one another by any name except the officially inscribed family name. Those teachers who did not apply the rule with enthusiasm were to be punished. More efficacious perhaps, given the minuscule school enrollment, was the proviso that forbade priests and military and civil officials from accepting any document, application, petition, or deed that did not use the official surnames. All documents using other names would be null and void.
Similar provisions ensured the replacement of local dialects with the approved national language. Students were only allowed to learn the national language in school and were punished for speaking in the vernacular. All formal documents had to be in the national language, which meant that peasants who had formerly been able to manage their own legal affairs had to rely on national-language-speaking intermediaries. Scott talks about the effect in France:
One can hardly imagine a more effective formula for immediately devaluing local knowledge and privileging all those who had mastered the official linguistic code. It was a gigantic shift in power. Those at the periphery who lacked competence in French were rendered mute and marginal. They were now in need of a local guide to the new state culture, which appeared in the form of lawyers, notaries, schoolteachers, clerks, and soldiers.
IV.
So the early modern period is defined by an uneasy truce between states who want to be able to count and standardize everything, and citizens who don’t want to let them. Enter High Modernism. Scott defines it as
A strong, one might even say muscle-bound, version of the self-confidence about scientific and technical progress, the expansion of production, the growing satisfaction of human needs, the mastery of nature (including human nature), and above all, the rational design of social order commensurate with the scientific understanding of natural laws
…which is just a bit academic-ese for me. An extensional definition might work better: standardization, Henry Ford, the factory as metaphor for the best way to run everything, conquest of nature, New Soviet Man, people with college degrees knowing better than you, wiping away the foolish irrational traditions of the past, Brave New World, everyone living in dormitories and eating exactly 2000 calories of Standardized Food Product (TM) per day, anything that is For Your Own Good, gleaming modernist skyscrapers, The X Of The Future, complaints that the unenlightened masses are resisting The X Of The Future, demands that if the unenlightened masses reject The X Of The Future they must be re-educated For Their Own Good, and (of course) evenly-spaced rectangular grids.
(maybe the best definition would be “everything G. K. Chesterton didn’t like.”)
It sort of sounds like a Young Adult Dystopia, but Scott shocked me with his research into just how strong this ideology was around the turn of the last century. Some of the greatest early 20th-century thinkers were High Modernist to the point of self-parody, the point where a Young Adult Dystopian fiction writer would start worrying they were laying it on a little too thick.
The worst of the worst was Le Corbusier, the French artist/intellectual/architect. The Soviets asked him to come up with a plan to redesign Moscow. He came up with one: kick out everyone, bulldoze the entire city, and redesign it from scratch upon rational principles. For example, instead of using other people’s irrational systems of measurement, they would use a new measurement system invented by Le Corbusier himself, called Modulor, which combined the average height of a Frenchman with the Golden Ratio.
Also, evenly-spaced rectangular grids may have been involved.
The Soviets decided to pass: the plan was too extreme and destructive of existing institutions even for Stalin. Undeterred, Le Corbusier changed the word “Moscow” on the diagram to “Paris”, then presented it to the French government (who also passed). Some aspects of his design eventually ended up as Chandigarh, India.
A typical building in Chandigarh. The Soviets and French must have been kicking themselves when they realized what they’d missed out on.
Le Corbusier was challenged on his obsession with keeping his plan in the face of different local conditions, pre-existing structures, residents who might want a say in the matter, et cetera. Wasn’t it kind of dictatorial? He replied that:
The despot is not a man. It is the Plan. The correct, realistic, exact plan, the one that will provide your solution once the problem has been posited clearly, in its entirety, in its indispensable harmony. This plan has been drawn up well away from the frenzy in the mayor’s office or the town hall, from the cries of the electorate or the laments of society’s victims. It has been drawn up by serene and lucid minds. It has taken account of nothing but human truths. It has ignored all current regulations, all existing usages, and channels. It has not considered whether or not it could be carried out with the constitution now in force. It is a biological creation destined for human beings and capable of realization by modern techniques.
What was so great about this “biological creation” of “serene and lucid minds”? It…might have kind of maybe been evenly-spaced rectangular grids:
People will say: “That’s easily said! But all your intersections are right angles. What about the infinite variations that constitute the reality of our cities?” But that’s precisely the point: I eliminate all these things. Otherwise we shall never get anywhere.
I can already hear the storms of protest and the sarcastic gibes: “Imbecile, madman, idiot, braggart, lunatic, etc.” Thank you very much, but it makes no difference: my starting point is still the same: I insist on right-angled intersections. The intersections shown here are all perfect.
Scott uses Le Corbusier as the epitome of five High Modernist principles.
First, there can be no compromise with the existing infrastructure. It was designed by superstitious people who didn’t have architecture degrees, or at the very least got their architecture degrees in the past and so were insufficiently Modern. The more completely it is bulldozed to make way for the Glorious Future, the better.
Second, human needs can be abstracted and calculated. A human needs X amount of food. A human needs X amount of water. A human needs X amount of light, and prefers to travel at X speed, and wants to live within X miles of the workplace. These needs are easily calculable by experiment, and a good city is the one built to satisfy these needs and ignore any competing frivolities.
Third, the solution is the solution. It is universal. The rational design for Moscow is the same as the rational design for Paris is the same as the rational design for Chandigarh, India. As a corollary, all of these cities ought to look exactly the same. It is maybe permissible to adjust for obstacles like mountains or lakes. But only if you are on too short a budget to follow the rationally correct solution of leveling the mountain and draining the lake to make your city truly optimal.
Fourth, all of the relevant rules should be explicitly determined by technocrats, then followed to the letter by their subordinates. Following these rules is better than trying to use your intuition, in the same way that using the laws of physics to calculate the heat from burning something is better than just trying to guess, or following an evidence-based clinical algorithm is better than just prescribing whatever you feel like.
Fifth, there is nothing whatsoever to be gained or learned from the people involved (eg the city’s future citizens). You are a rational modern scientist with an architecture degree who has already calculated out the precise value for all relevant urban parameters. They are yokels who probably cannot even spell the word architecture, let alone usefully contribute to it. They probably make all of their decisions based on superstition or tradition or something, and their input should be ignored For Their Own Good.
And lest I be unfair to Le Corbusier, a lot of his scientific rational principles made a lot of sense. Have wide roads so that there’s enough room for traffic and all the buildings get a lot of light. Use rectangular grids to make cities easier to navigate. Avoid frivolous decoration so that everything is efficient and affordable to all. Use concrete because it’s the cheapest and strongest material. Keep pedestrians off the streets as much as possible so that they don’t get hit by cars. Use big apartment towers to save space, then use the open space for pretty parks and public squares. Avoid anything that looks like a local touch, because nationalism leads to war and we are all part of the same global community of humanity. It sounded pretty good, and for a few decades the entire urban planning community was convinced.
So, how did it go?
Scott uses the example of Brasilia. Brazil wanted to develop its near-empty central regions and decided to build a new capital in the middle of nowhere. They hired three students of Le Corbusier, most notably Oscar Niemeyer, to build them a perfect scientific rational city. The conditions couldn’t have been better. The land was already pristine, so there was no need to bulldoze Paris first. There were no inconvenient mountains or forests in the way. The available budget was in the tens of billions. The architects rose to the challenge and built them the world’s greatest High Modernist city.
It’s…even more beautiful than I imagined
Yet twenty years after its construction, the city, designed for 500,000 residents, was only half full. And it wasn’t the location – a belt of suburbs grew up around it with a population of almost a million. People wanted to live in the vicinity of Brasilia. They just didn’t want to live in the parts that Niemeyer and the Corbusierites had built.
Brasilia from above. Note both the evenly-spaced rectangular grid of identical buildings in the center, and the fact that most people aren’t living in it.
What happened? Scott writes:
Most of those who have moved to Brasilia from other cities are amazed to discover “that it is a city without crowds.” People complain that Brasilia lacks the bustle of street life, that it has none of the busy street corners and long stretches of storefront facades that animate a sidewalk for pedestrians. For them, it is almost as if the founders of Brasilia, rather than having planned a city, have actually planned to prevent a city. The most common way they put it is to say that Brasilia “lacks street corners,” by which they mean that it lacks the complex intersections of dense neighborhoods comprising residences and public cafes and restaurants with places for leisure, work, and shopping.
While Brasilia provides well for some human needs, the functional separation of work from residence and of both from commerce and entertainment, the great voids between superquadra, and a road system devoted exclusively to motorized traffic make the disappearance of the street corner a foregone conclusion. The plan did eliminate traffic jams; it also eliminated the welcome and familiar pedestrian jams that one of Holston’s informants called “the point of social conviviality.”
The term brasilite, meaning roughly Brasilia-itis, which was coined by the first-generation residents, nicely captures the trauma they experienced. As a mock clinical condition, it connotes a rejection of the standardization and anonymity of life in Brasilia. “They use the term brasilite to refer to their feelings about a daily life without the pleasures – the distractions, conversations, flirtations, and little rituals of outdoor life in other Brazilian cities.” Meeting someone normally requires seeing them either at their apartment or at work. Even if we allow for the initial simplifying premise of Brasilia’s being an administrative city, there is nonetheless a bland anonymity built into the very structure of the capital. The population simply lacks the small accessible spaces that they could colonize and stamp with the character of their activity, as they have done historically in Rio and Sao Paulo. To be sure, the inhabitants of Brasilia haven’t had much time to modify the city through their practices, but the city is designed to be fairly recalcitrant to their efforts.
“Brasilite,” as a term, also underscores how the built environment affects those who dwell in it. Compared to life in Rio and Sao Paulo, with their color and variety, the daily round in bland, repetitive, austere Brasilia must have resembled life in a sensory deprivation tank. The recipe for high-modernist urban planning, while it may have created formal order and functional segregation, did so at the cost of a sensorily impoverished and monotonous environment – an environment that inevitably took its toll on the spirits of its residents.
The anonymity induced by Brasilia is evident from the scale and exterior of the apartments that typically make up each residential superquadra. For superquadra residents, the two most frequent complaints are the sameness of the apartment blocks and the isolation of the residences (“In Brasilia, there is only house and work”). The facade of each block is strictly geometric and egalitarian. Nothing distinguishes the exterior of one apartment from another; there are not even balconies that would allow residents to add distinctive touches and create semipublic spaces.
Brasilia is interesting only insofar as it was an entire High Modernist planned city. In most places, the Modernists rarely got their hands on entire cities at once. They did build a number of suburbs, neighborhoods, and apartment buildings. There was, however, a disconnect. Most people did not want to buy a High Modernist house or live in a High Modernist neighborhood. Most governments did want to fund High Modernist houses and neighborhoods, because the academics influencing them said it was the modern scientific rational thing to do. So in the end, one of the High Modernists’ main contributions to the United States was the projects – ie government-funded public housing for poor people who didn’t get to choose where to live.
I never really “got” Jane Jacobs. I originally interpreted her as arguing that it was great for cities to be noisy and busy and full of crowds, and that we should build neighborhoods that are confusing and hard to get through to force people to interact with each other and prevent them from being able to have privacy, and no one should be allowed to live anywhere quiet or nice. As somebody who (thanks to the public school system, etc) has had my share of being forced to interact with people, and of being placed in situations where it is deliberately difficult to have any privacy or time to myself, I figured Jane Jacobs was just a jerk.
But Scott has kind of made me come around. He rehabilitates her as someone who was responding to the very real excesses of High Modernism. She was the first person who really said “Hey, maybe people like being in cute little neighborhoods”. Her complaint wasn’t really against privacy or order per se so much as against the extreme High Modernist perversions of those concepts that people empirically hated. And her background makes this all too understandable – she started out as a journalist covering poor African-Americans who lived in the projects and had some of the same complaints as the Brazilians in Brasilia.
Her critique of Le Corbusierism was mostly what you would expect, but Scott extracts some points useful for their contrast with the Modernist points earlier:
First, existing structures are evolved organisms built by people trying to satisfy their social goals. They contain far more wisdom about people’s needs and desires than anybody could formally enumerate. Any attempt at urban planning should try to build on this encoded knowledge, not detract from it.
Second, man does not live by bread alone. People don’t want the right amount of Standardized Food Product, they want social interaction, culture, art, coziness, and a host of other things nobody will ever be able to calculate. Existing structures have already been optimized for these things, and unless you’re really sure you understand all of them, you should be reluctant to disturb them.
Third, solutions are local. Americans want different things than Africans or Indians. One proof of this is that New York looks different from Lagos and from Delhi. Even if you are the world’s best American city planner, you should be very concerned that you have no idea what people in Africa need, and you should be very reluctant to design an African city without extensive consultation of people who understand the local environment.
Fourth, even a very smart and well-intentioned person who is on board with points 1-3 will never be able to capture all of this in an explicit set of rules. Most of people’s knowledge is implicit, and most formal rule codes are quickly supplanted by informal systems that actually work and are much more effective (the classic example is the work-to-rule strike, where workers grind everything to a halt simply by following the official rules to the letter).
Fifth, although well-educated technocrats may understand principles which give them some advantages in their domain, they are hopeless without the on-the-ground experience of the people they are trying to serve, whose years of living in their environment and dealing with it every day have given them a deep practical knowledge which is difficult to codify.
How did Jacobs herself choose where to live? As per her Wikipedia page:
[Jacobs] took an immediate liking to Manhattan’s Greenwich Village, which did not conform to the city’s grid structure.
V.
The same thing that happened with cities happened with farms. The American version was merely farce:
We should recognize that the rationalization of farming on a huge, even national, scale was part of a faith shared by social engineers and agricultural planners throughout the world. And they were conscious of being engaged in a common endeavor…They kept in touch through journals, professional conferences, and exhibitions. The connections were strongest between American agronomists and their Russian colleagues – connections that were not entirely broken even during the Cold War. Working in vastly different economic and political environments, the Russians tended to be envious of the level of capitalization, particularly in mechanization, of American farms while the Americans were envious of the political scope of Soviet planning. The degree to which they were working together to create a new world of large-scale, rational, industrial agriculture can be judged by this brief account of their relationship […]
Many efforts were made to put this faith to the test. Perhaps the most audacious was the Thomas Campbell “farm” in Montana, begun – or, perhaps I should say, founded – in 1918. It was an industrial farm in more than one respect. Shares were sold by prospectuses describing the enterprise as an “industrial opportunity”; J. P. Morgan, the financier, helped to raise $2 million from the public. The Montana Farming Corporation was a monster wheat farm of ninety-five thousand acres, much of it leased from four Native American tribes. Despite the private investment, the enterprise would never have gotten off the ground without help and subsidies from the Department of Interior and the United States Department of Agriculture (USDA).
Proclaiming that farming was about 90 percent engineering and only 10 percent agriculture, Campbell set about standardizing as much of his operation as possible. He grew wheat and flax, two hardy crops that needed little if any attention between planting and harvest time. The land he farmed was the agricultural equivalent of the bulldozed site of Brasilia. It was virgin soil, with a natural fertility that would eliminate the need for fertilizer. The topography also vastly simplified matters: it was flat, with no forests, creeks, rocks, or ridges that would impede the smooth course of machinery over its surface. In other words, the selection of the simplest, most standardized crops and the leasing of something very close to a blank agricultural space were calculated to favor the application of industrial methods […]
This is not the place to chronicle the fortunes of the Montana Farming Corporation, and in any event Deborah Fitzgerald has done so splendidly. Suffice it to note that a drought in the second year and the elimination of government support for prices the following year led to a collapse that cost J. P. Morgan $1 million. The Campbell farm faced other problems besides weather and prices: soil differences, labor turnover, the difficulty of finding skilled, resourceful workers who would need little supervision. Although the corporation struggled on until Campbell’s death in 1966, it provided no evidence that industrial farms were superior to family farms in efficiency and profitability.
But the Soviet version was tragedy. Instead of raising some money to start a giant farm and seeing it didn’t work, the USSR uprooted millions of peasants, forced them onto collective farms, and then watched as millions of people starved to death due to crop failure. What happened?
Scott really focuses on that claim (above) that farming was “90% engineering and only 10% agriculture”. He says that these huge farms all failed – despite being better-funded, higher-tech, and having access to the wisdom of the top agricultural scientists – exactly because this claim was false. Small farmers may not know much about agricultural science, but they know a lot about farming. Their knowledge is intuitive and local – for example, what to do in a particular climate or soil. It is sometimes passed down over generations, and other times determined through long hours of trial-and-error.
He gives the example of Tanzania, where small farmers grew dozens of different crops together in seeming chaos. Western colonists tried to convince them – often by force – to switch to growing just one thing at a time, to reap the advantages of efficiency, standardization, and specialization of labor. Growing only one crop in a field was Agricultural Science 101. But this turned out to be a bad idea in the difficult Tanzanian environment:
The multistoried effect of polyculture has some distinct advantages for yields and soil conservation. “Upper-story” crops shade “lower-story” crops, which are selected for their ability to thrive in the cooler soil temperature and increased humidity at ground level. Rainfall reaches the ground not directly but as a fine spray that is absorbed with less damage to soil structure and less erosion. The taller crops often serve as a useful windbreak for the lower crops. Finally, in mixed or relay cropping, a crop is in the field at all times, holding the soil together and reducing the leaching effects that sun, wind, and rain exert, particularly on fragile land. Even if polyculture is not to be preferred on the grounds of immediate yield, there is much to recommend it in terms of sustainability and thus long-term production.
Our discussion of mixed cropping has thus far dealt only with the narrow issues of yield and soil conservation. It has overlooked the cultivators themselves and the various other ends that they seek by using such techniques. The most significant advantage of intercropping, Paul Richards claims, is its great flexibility, “the scope [it] offers for a range of combinations to match individual needs and preferences, local conditions, and changing circumstances within each season and from season to season.” Farmers may polycrop in order to avoid labor bottlenecks at planting and at harvest. Growing many different crops is also an obvious way to spread risks and improve food security. Cultivators can reduce the danger of going hungry if they sow, instead of only one or two cultivars, crops of long and short maturity, crops that are drought resistant and those that do well under wetter conditions, crops with different patterns of resistance to pests and diseases, crops that can be stored in the ground with little loss (such as cassava), and crops that mature in the “hungry time” before other crops are gathered. Finally, and perhaps most important, each of these crops is embedded in a distinctive set of social relations. Different members of the household are likely to have different rights and responsibilities with respect to each crop. The planting regimen, in other words, is a reflection of social relations, ritual needs, and culinary tastes; it is not just a production strategy that a profit-maximizing entrepreneur took straight out of the pages of a text in neoclassical economics.
Nor could this be solved just by adding a pinch of empiricism. A lot of European farming specialists were into empiricism, sort of. What they ended up doing was creating crops that worked really well in a lab but not in actual Tanzania. If they were lucky, they created crops that worked really well on the one experimental farm in Tanzania they fenced off as a testing ground, but not on any other Tanzanian farms. If they were really lucky, they created crops that would grow on Tanzanian farms and be good on whatever single axis they were optimizing (like selling for lots of money) but not in other ways that were equally important to the populace (like being low-risk, or useful for non-food purposes, or resistant to disease, or whatever). And if they were supremely lucky, then they would go to the Tanzanians and say “Hey, we invented a new farming method that solves all your problems!” and the Tanzanians would say “Yeah, we heard rumors about that, so we tried it ourselves, and now we’ve been using it for five years and advanced it way beyond what you were doing.”
There were some scientists who got beyond these failure modes, and Scott celebrates them (while all too often describing how they were marginalized and ignored by the rest of the scientific community). But at the point where you’ve transcended all this, you’re no longer a domain-general agricultural scientist, you’re a Tanzanian farming specialist who’s only one white coat removed from being a Tanzanian farmer yourself.
Even in less exotic locales like Russia, the peasant farmers were extraordinary experts on the conditions of their own farms, their own climates, and their own crops. Take all of these people, transport them a thousand miles away, and give them a perfectly rectangular grid to grow Wheat Cultivar #6 on, and you have a recipe for disaster.
VI.
So if this was such a bad idea, why did everyone keep doing it?
Start with the cities. Scott notes that although citizens generally didn’t have a problem with earlier cities, governments did:
Historically, the relative illegibility to outsiders of some urban neighborhoods has provided a vital margin of political safety from control by outside elites. A simple way of determining whether this margin exists is to ask if an outsider would have needed a local guide in order to find her way successfully. If the answer is yes, then the community or terrain in question enjoys at least a small measure of insulation from outside intrusion. Coupled with patterns of local solidarity, this insulation has proven politically valuable in such disparate contexts as eighteenth- and early nineteenth-century urban riots over bread prices in Europe, the Front de Liberation Nationale’s tenacious resistance to the French in the Casbah of Algiers, and the politics of the bazaar that helped to bring down the Shah of Iran. Illegibility, then, has been and remains a reliable resource for political autonomy.
This was a particular problem in Paris, which was famous for a series of urban insurrections in the 19th century (think Les Miserables, but about once every ten years or so). Although these generally failed, they were hard to suppress because locals knew the “terrain” and the streets were narrow enough to barricade. Slums full of poor people gathered together formed tight communities where revolutionary ideas could easily spread. The late 19th-century redesign of Paris had the explicit goal of destroying these areas and splitting the poor up, moving them somewhere far away from the city center where they couldn’t do any harm.
The Soviet collective farms had the same dubious advantage. The problem they “effectively” “solved” was the non-collectivized farmers becoming too powerful and independent a political bloc. They lived in tight-knit little villages that did their own thing, the Party officials who went to these villages to keep order often ended up “going native”, and the Soviets had no way of knowing how much food the farmers were producing and whether they were giving enough of it to the Motherland:
Confronting a tumultuous, footloose, and “headless” rural society which was hard to control and which had few political assets, the Bolsheviks, like the scientific foresters, set about redesigning their environment with a few simple goals in mind. They created, in place of what they had inherited, a new landscape of large, hierarchical, state-managed farms whose cropping patterns and procurement quotas were centrally mandated and whose population was, by law, immobile. The system thus devised served for nearly sixty years as a mechanism for procurement and control at a massive cost in stagnation, waste, demoralization, and ecological failure.
The collectivized farms couldn’t grow much, but people were thrown together in artificial towns designed to make it impossible to build any kind of community: there was nowhere to be except in bed asleep, working in the fields, or at the public school receiving your daily dose of state propaganda. The towns were identical concrete buildings on a grid, which left the locals maximally disoriented (because there are no learnable visual cues) and the officials maximally oriented (because even a foreigner could go to the intersection of Street D and Street 7). All fields were perfectly rectangular and produced Standardized Food Product, so it was (theoretically) easy to calculate how much they should be producing and whether people were meeting that target. And everyone was in the same place, so if there were some sort of problem it was much easier to bring in the army or secret police than if they were split up among a million tiny villages in the middle of nowhere.
So although modernist cities and farms may have started out as attempts to help citizens with living and farming, they ended up as contributors to the great government project of legibility and taxing people effectively.
Seeing Like A State summarizes the sort of on-the-ground ultra-empirical knowledge that citizens have of city design and peasants of farming as metis, a Greek term meaning “practical wisdom”. I was a little concerned about this because they seem like two different things. The average citizen knows nothing about city design and in fact does not design cities; cities sort of happen in a weird way through cultural evolution or whatever. The average farmer knows a lot about farming (even if it is implicit and not as book learning) and applies that knowledge directly in how they farm. But Scott thinks these are more or less the same thing, that this thing is a foundation of successful communities and industries, and that ignoring and suppressing it is what makes collective farms and modernist planned cities so crappy. He generalizes this further to almost every aspect of a society – its language, laws, social norms, and economy. But this is all done very quickly, and I feel like there was a sleight of hand between “each farmer eventually figures out how to farm well” and “social norms converge on good values”.
Insofar as Scott squares the above circle, he seems to think that many actors competing with each other will eventually carve out a beneficial equilibrium better than that of any centralized authority. This doesn’t really mesh well with my own fear that many actors competing with each other will eventually shoot themselves in the foot and destroy everything, and I haven’t really seen a careful investigation of when we get one versus the other.
VII.
What are we to make of all of this?
Well, for one thing, Scott basically admits to stacking the dice against High Modernism and legibility. He admits that the organic livable cities of old had life expectancies in the forties because nobody got any light or fresh air and they were all packed together with no sewers and so everyone just died of cholera. He admits that at some point agricultural productivity multiplied by like a thousand times and the Green Revolution saved millions of lives and all that, and probably that has something to do with scientific farming methods and rectangular grids. He admits that it’s pretty convenient having a unit of measurement that local lords can’t change whenever they feel like it. Even modern timber farms seem pretty successful. After all those admissions, it’s kind of hard to see what’s left of his case.
(also, I grew up in Irvine, the most planned of planned cities, and I loved it.)
What Scott eventually says is that he’s not against legibility and modernism per se, but he wants to present them as ingredients in a cocktail of state failure. You need a combination of four things to get a disaster like Soviet collective farming (or his other favorite example, compulsory village settlement in Tanzania). First, a government incentivized to seek greater legibility for its population and territory. Second, a High Modernist ideology. Third, authoritarianism. And fourth, a “prostrate civil society”, like in Russia after the Revolution, or in colonies after the Europeans took over.
I think his theory is that the back-and-forth between centralized government and civil society allows scientific advances to be implemented smoothly instead of just plowing over everyone in a way that leads to disaster. I also think that maybe a big part of it is incremental versus sudden: western farming did well because it got to incrementally add advances and see how they worked, but when you threw the entire edifice at Tanzania it crashed and burned.
I’m still not really sure what’s left. Authoritarianism is bad? Destroying civil society is bad? You shouldn’t do things when you have no idea what you’re doing and all you’ve got to go on is your rectangle fetish? The book contained some great historical tidbits, but I’m not sure what overarching lesson I learned from it.
It’s not that I don’t think Scott’s preference for metis over scientific omnipotence has value. I think it has lots of value. I see this all the time in psychiatry, which always has been and to some degree still is really High Modernist. We are educated people who know a lot about mental health, dealing with a poor population who (in the case of one of my patients) refers to Haldol as “Hound Dog”. It’s very easy to get in the trap of thinking that you know better than these people, especially since you often do (I will never understand how many people are shocked when I diagnose their sleep disorder as having something to do with them drinking fifteen cups of coffee a day).
But psychiatric patients have a metis of dealing with their individual diseases the same way peasants have a metis of dealing with their individual plots of land. My favorite example of this is doctors who learn their patients are taking marijuana, refuse to keep prescribing them their vitally important drugs unless the patients promise to stop, and then get surprised when the patients end up decompensating because the marijuana was keeping them together. I’m not saying smoking marijuana is a good thing. I’m saying that for some people it’s a load-bearing piece of their mental edifice. And if you take it away without any replacement they will fall apart. And they have explained this to you a thousand times and you didn’t believe them.
There are so many fricking patients who respond to sedative medications by becoming stimulated, or stimulant medications by becoming sedated, or who become more anxious whenever they do anti-anxiety exercises, or who hallucinate when placed on some super common medication that has never caused hallucinations in anyone else, or who become suicidal if you try to reassure them that things aren’t so bad, or any other completely perverse and ridiculous violation of the natural order that you can think of. And the only redeeming feature of all of this is that the patients themselves know all of this stuff super-well and are usually happy to tell you if you ask.
I can totally imagine going into a psychiatric clinic armed with the Evidence-Based Guidelines the same way Le Corbusier went into Moscow and Paris armed with his Single Rational City Plan and the same way the agricultural scientists went into Tanzania armed with their List Of Things That Definitely Work In Europe. I expect it would have about the same effect for about the same reason.
(including the part where I would get promoted. I’m not too sure what’s going on there, actually.)
So fine, Scott is completely right here. But I’m only bringing this up because it’s something I’ve already thought about. If I didn’t already believe this, I’d be indifferent between applying the narrative of the wise Tanzanian farmers knowing more than their English colonizers, versus the narrative of the dumb yokels who refuse to get vaccines because they might cause autism. Heuristics work until they don’t. Scott provides us with these great historical examples of local knowledge outdoing scientific acumen, but other stories present us with great historical examples of the opposite, and when to apply which heuristic seems really unclear. Even “don’t bulldoze civil society and try to change everything at once” goes astray sometimes; the Meiji Restoration was wildly successful by doing exactly that.
Maybe I’m trying to take this too far by talking about psychiatry and Meiji Restorations. Most of Scott’s good examples involved either agriculture or resettling peasant villages. This is understandable; Scott is a scholar of colonialism in Southeast Asia and there was a lot of agriculture and peasant resettling going on there. But it’s a pretty limited domain. The book amply proves that peasants know an astounding amount about how to deal with local microclimates and grow local varieties of crops and so on, and frankly I am shocked that anyone with an IQ of less than 180 has ever managed to be a peasant farmer, but how does that apply to the sorts of non-agricultural issues we think about more often?
The closest analogy I can think of right now – maybe because it’s on my mind – is this story about check-cashing shops. Professors of social science think these shops are evil because they charge the poor higher rates, so they should be regulated away so that poor people don’t foolishly shoot themselves in the foot by going to them. But on closer inspection, they offer a better deal for the poor than banks do, for complicated reasons that aren’t visible just by comparing the raw numbers. Poor people’s understanding of this seems a lot like the metis that helps them understand local agriculture. And progressives’ desire to shift control to the big banks seems a lot like the High Modernists’ desire to shift everything to a few big farms. Maybe this is a point in favor of something like libertarianism? Maybe especially a “libertarianism of the poor” focusing on things like occupational licensing, not shutting down various services to the poor because they don’t meet rich-people standards, not shutting down various services to the poor because we think they’re “price-gouging”, et cetera?
Maybe instead of concluding that Scott is too focused on peasant villages, we should conclude that he’s focused on confrontations between a well-educated authoritarian overclass and a totally separate poor underclass. Most modern political issues don’t exactly map onto that – even things like taxes where the rich and the poor are on separate sides don’t have a bimodal distribution. But in cases that are literally about rich people trying to dictate to the poorest of the poor how they should live their lives, maybe this becomes more useful.
Actually, one of the best things the book did to me was make me take cliches about “rich people need to defer to the poor on poverty-related policy ideas” more seriously. This has become so overused that I roll my eyes at it: “Could quantitative easing help end wage stagnation? Instead of asking macroeconomists, let’s ask this 19-year old single mother in the Bronx!” But Scott provides a lot of situations where that was exactly the sort of person they should have asked. He also points out that Tanzanian natives using their traditional farming practices were more productive than European colonists using scientific farming. I’ve had to listen to so many people talk about how “we must respect native people’s different ways of knowing” and “native agriculturalists have a profound respect for the earth that goes beyond logocentric Western ideals” and nobody had ever bothered to tell me before that they actually produced more crops per acre, at least some of the time. That would have put all of the other stuff in a pretty different light.
I understand Scott is an anarchist. He didn’t really try to defend anarchism in this book. But I was struck by his description of peasant villages as this totally separate unit of government which was happily doing its own thing very effectively for millennia, with the central government’s relevance being entirely negative – mostly demanding taxes or starting wars. They kind of reminded me of some pictures of hunter-gatherer tribes, in terms of being self-sufficient, informal, and just never encountering the sorts of economic and political problems that we take for granted. They make communism (the type with actual communes, not the type where you have Five Year Plans and Politburos and gulags) look more attractive. I think Scott was trying to imply that this is the sort of thing we could have if not for governments demanding legibility and a world of universal formal rule codes accessible from the center? Since he never actually made the argument, it’s hard for me to critique it. And I wish there had been more about cultural evolution as separate from the more individual idea of metis.
A final note: Scott often used the word “rationalism” to refer to the excesses of High Modernism, and I’ve deliberately kept it. What relevance does this have for the LW-Yudkowsky-Bayesian rationalist project? I think the similarities are more than semantic; there certainly is a hope that learning domain-general skills will allow people to leverage raw intelligence and The Power Of Science to various different object-level domains. I continue to be doubtful that this will work in the sort of practical domains where people have spent centuries gathering metis in the way Scott describes; this is why I’m wary of any attempt of the rationality movement to branch into self-help. I’m more optimistic about rationalists’ ability to open underexplored areas like existential risk – it’s not like there’s a population of Tanzanian peasants who have spent the last few centuries developing traditional x-risk research whom we are arrogantly trying to replace – and to focus on things that don’t bring any immediate practical gain but which help build the foundations for new philosophies, better communities, and more positive futures. I also think that a good art of rationality would look a lot like metis, combining easily teachable mathematical rules with more implicit virtues which get absorbed by osmosis.
Overall I did like this book. I’m not really sure what I got from its thesis, but maybe that was appropriate. Seeing Like A State was arranged kind of like the premodern forests and villages it describes; not especially well-organized, not really directed toward any clear predetermined goal, but full of interesting things and lovely to spend some time in.
Would the idea that everyone needs/should go to college and so we’ll make loans easy to take be an example of another high modernist failing in our modern era? Or am I stretching the definition/concept here?
Tricky question.
The subsidies seem to support your claim, but on the other hand employers and prospective students are the ones driving the demand for degrees, and they are the ones holding any metis in this case, which would make the high-modernists those who claim we should tear down education since all our lab-room science shows it’s useless.
So this seems more like a case where the common people and the state are on the same side, and possibly shooting themselves in the foot.
How much metis is that, really? The drive towards college is a relatively recent phenomenon, compared to the other examples provided, and I’d hesitate to agree that there’s any sort of battle-tested equilibrium at this point.
Good point, I’d argue more than 100 years so there’s not much to go on. I think that puts us in a position where this just isn’t an issue that relates to existing knowledge vs. lazy rationality framing.
What about the argument that employers only care about degrees because it’s a way to exclude icky poor people without suffering an employment-equity lawsuit? Metis and responses to incentives look similar from the outside.
Aren’t they the same thing from the inside as well?
I mean, whether we are talking about people subconsciously or consciously not wanting to hire “icky poor people”, they probably want to avoid it because they believe it would hurt them.
The higher-education-subsidy abolitionists are still trying to change the current order because they Know What’s Best for other people. (except that davidp is right and no side here can claim it’s using lots of data)
Instead of getting data by looking back in time, why not look at other countries?
That seems unlikely to me. I would expect there to be some “status” value associated with college, but right now the gap between what the average college educated worker is paid and what the average worker without college is paid is huge, and has been rapidly growing larger and larger.
http://www.epi.org/publication/class-of-2016/
>Among young high school graduates, real (inflation-adjusted) average wages stand at $10.66 per hour—2.5 percent lower than in 2000. Young college graduates have average wages of $18.53—roughly the same as in 2000 (only 0.7 percent higher).
I have to think that there is some sense in which college is actually making workers more economically productive compared to workers without a college degree (speaking in relative terms, it may just be that people without college are much less economically valuable today). Otherwise I would not expect that gap to be that large; if a business could hire non-college workers and save something like 40% on its labor costs without losing economic productivity, I’d have to think that more would be doing so, which would narrow the gap.
Like the joke Scott referenced about the economist saying “If there was a $20 bill on the ground someone would have picked it up by now”. If college educated workers really didn’t add much in terms of worker productivity or in some very significant way provide value to a company, then there are a heck of a lot of $20 bills businesses are just leaving on the ground, and I find that unlikely.
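For what it’s worth, the “something like 40%” above follows directly from the two EPI averages quoted earlier; here is a minimal sketch of the arithmetic, assuming the hourly wage stands in for total labor cost (benefits and overhead ignored):

```python
# Back-of-the-envelope check of the "~40%" labor-cost figure,
# using only the EPI hourly averages quoted above.
# Assumption: the hourly wage stands in for total labor cost.
college_wage = 18.53      # young college graduates, average hourly wage
high_school_wage = 10.66  # young high school graduates, average hourly wage

savings = (college_wage - high_school_wage) / college_wage
print(f"Relative saving from hiring at the high-school wage: {savings:.1%}")
# Relative saving from hiring at the high-school wage: 42.5%
```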
Could be they’re just more ambitious.
I’m sure that’s a part of it, there are obviously a lot of confounding effects here that make this hard to measure. But I think the fact that the gap between “college” and “non-college” is getting *bigger* is significant, and probably implies an actual increasing difference in the value of the worker.
Nah, I think it is just that the value of intelligence continues to rise as economies become more automated, so proxies for intelligence such as college also correlate better with income.
That argument:
1: Doesn’t fit the history. There have been plenty of poor people throughout time. College degrees have only been in high demand relatively recently.
2: Doesn’t make any sense. The U.S. is not generally so classist as to go ‘Ewww. Poor people!’. Certain extreme levels of poverty might trigger a disgust response, but that is a very small band of ‘poor’. Those that think otherwise are likely projecting.
College degrees came to prominence as an employment requirement because degrees are being used as a proxy for IQ tests.
That is not a fact. There is no reason why IQ should be a better indication of ability to do job X than having done job X. It doesn’t even convey information about conscientiousness. And IQ testing is not widely used in countries where it is not a legal problem.
The obsession with IQ is one of the great irrationalities of the rationalsphere.
Remember also that in the US context, “poor” is often a euphemism for “minority”.
“That is not a fact. There is no reason why IQ should be a better indication of ability to do job X than having done job X.”
Yeah, it is. Just because you don’t like it doesn’t mean that it isn’t true. The ban happened in the late 70s, which is right before college tuition started to skyrocket. While correlation isn’t causation, it does wink suggestively and say ‘look over here’. Having worked with a few hiring managers, I can tell you that the thought is going through their minds, regardless of what they put on paper.
Intelligence and conscientiousness are the two best single traits for predicting job performance (as well as lifetime income). It is so critical that the U.S. military has set a lower bound on the IQ it will accept (yes, they use the ASVAB, but a rose by any other name). Also keep in mind that hiring decisions aren’t really about finding the best person, but the person for whom hiring is the easiest to defend. Which is why you see years and years of experience being required even though performance for most people has hit diminishing returns after 3 years.
“IQ testing is not widely used in countries where it is not a legal problem.”
Different cultures have different priorities. In the U.S. we love the interview even though it has been determined time and time again to be worthless.
Here in the UK we’ve had similar pushes to increase university enrolment, and as far as I’m aware it’s not illegal over here to set IQ tests for job applicants.
There’s no ban as such in the US.
There is a legal regime whose practical implementation causes people to greatly fear lawsuits should they implement IQ testing for prospective employees but to not fear lawsuits should they implement a (possibly informal) requirement for college degrees.
This is the sort of thing that many people will colloquially refer to as a “ban” if they don’t want to spell out an extra thirty words.
I dashed out the first reply on my lunch. Now I have a bit more time to reply in depth. Part of the problem is that several different assertions have been blended, so I will try to separate them and address each individually. I can only speak for the U.S., so in all my further statements, it can be assumed that I am talking about U.S. conditions, unless otherwise specified.
Are intelligence tests prohibited? It is technically correct to state that intelligence tests are not strictly prohibited. Such tests are (for a variety of reasons) a major legal minefield for a company to walk through, so most companies are loath to use them.
Is intelligence a good predictor of job performance? Unequivocally yes. The more complex the job, the more intelligence matters. Obviously, you will hit diminishing returns fairly quickly on intelligence if you are hiring someone to dig ditches with a shovel. Intelligence isn’t the only thing that matters, but it is quite important. This is especially true here, where low complexity jobs are undergoing rapid extinction.
Is a college degree a useful proxy for intelligence? Yes. The average IQ of a college student is 115, which puts the typical college student at roughly 1 standard deviation above average.
Saying that having done the job is the best indicator of being able to do the job is about the equivalent of saying A = A. From a prediction standpoint, that is essentially a tautology. The problem is with the application.
Just because someone was working at a job doesn’t mean they were any good at it. The world is filled with people that are performing marginally competent work at best. So just because they were ‘doing the job’ doesn’t mean you want them doing the job for you. Productivity isn’t normally distributed, it is Pareto distributed. 80% of everything is crap, and that goes for people too. As an employer, you are essentially blind to someone’s prior job performance in all but the most extreme cases. So just give them a work sample you say? Well…
Complex jobs are very difficult/time-consuming to model for a prospective worker. Such jobs frequently employ company-specific (or industry-specific) software/processes. This makes for a job that someone isn’t going to be able to just sit down and do. If it is a job they can sit down and do, and the prospective employee produces output of value, you may be legally obligated to pay the person for the output. As any given model is likely to not have been extensively litigated, using a job model for employment decisions will open the employer up to litigation risk. So proving that someone can do the job tends to be an expensive proposition.
With that in mind, consider who is doing the hiring. If it is some HR lackey or low level manager, you are probably looking at someone whose primary goal is to be able to defend the hiring decision, and fill the slot with an acceptable hire with a minimum of time/expense, not pick the best hire.
As to other cultures/economies, there are complicating considerations. Just because something is effective doesn’t mean that it is used, as well as just because something is used doesn’t mean it is effective. Additionally, not everything is done in the same manner. Unless you have had an extremely unusual career or have some research to share, it is highly unlikely that you are in any position to speak authoritatively on how other countries handle hiring. China, for example, does extensive intelligence testing. They just call it the civil service exam. Japan does all their testing upfront in the school system. In both of those cases, if you were looking for the employer to present the test you would likely miss it, as the test already happened.
@Jason K.
Where are you getting this 115 number from? When I tried to find out the average IQ of an undergrad degree holder, it was 105. Given that more than 30% of the US population holds undergraduate degrees, wouldn’t it be impossible for the average to be 115, since that 115 is ~85th percentile?
Jason: So, how can you distinguish between, say, “College serves as a proxy for IQ tests” and the alternate hypothesis “More education tends to make workers more economically productive, especially at the higher-skilled positions that are well paid today, so giving someone a college education increases their value in the labor market”?
Well, we could try seeing if students learn anything at college, as a first pass, before seeing if it’s useful.
I’ve certainly heard of reports that they often learn virtually nothing the first two years.
If you are going to go “yeah, it is”, you need to follow it with something more proof-like than a suggestive wink.
The issue is how good it is in relation to other predictors that an HR department might want to use.
Does an IQ test give you information about conscientiousness and knowledge relevant to the role? No. So it is still looking like degrees are a better indicator than an IQ test alone.
What’s bad about tautologies? They don’t convey novel information. What’s good about tautologies? They are true.
Indeed. If you are writing research papers, you don’t want to announce the obvious truth that past job performance is a good guide to future job performance. But a relationship of almost tautological reliability is just what you need to select a good hire.
Just because someone has a high IQ doesn’t mean they are necessarily good at a given job, either. Your comment proves too much. We are in the domain of statistical evidence and probabilistic reasoning here, not absolute logical necessity. We need evidence that one imperfect thing is better than another, because that is the kind of claim you are making.
Proves too much, or nothing.
The pro-IQ narrative is that IQ tests are more effective than alternatives, and that US employers would use them if allowed. The two claims in conjunction imply that US employers are competent, but you can’t prove that the US has the right hiring practices, absent government intervention, and everyone else is incompetent just by noting that there are a number of different cultures.
Scott actually looked at the research a while back and came to the conclusion that the claim “college teaches critical thinking skills” is to at least some extent backed up by the research, so I guess that’s one piece of evidence that some kind of learning takes place.
It’s ironic that IQ came up in a discussion about bad rationality, because it’s bad rationality. James Scott criticises rational forestry, with a single species of tree planted at regular intervals. The pro-IQ case likewise pushes a single-dimensional approach, and likewise deprecates the complexity of actual hiring practices.
I’ve heard a variant on that argument that holds that the gatekeepers aren’t animated by animus toward the icky poor people as much as by creating a glass floor for the yucky offspring of the rich people. Along with “glass floor” another phrase used to describe the alleged phenomenon is “opportunity hoarding”.
I suspect employers’ demand for degrees is a symptom of the absence of a different proxy for fitness for the work environment, one which was largely regulated out of existence by discrimination and similar concerns. It *may* have weeded out *some* discrimination, but on the way it replaced a set of cheap tests or “gut feelings” with a degree costing tens of thousands of dollars (at best) and years of time and effort, which is mostly wasted effort for an employer except for bits of knowledge that could be taught in several months and general “can consistently show up, follow instructions and exercise basic judgement” metrics.
As evidence for that, I see the growth of places where a degree is required now where it wasn’t before (kind of anecdotal, but seen this a lot). And growth of degrees which I can’t see any direct business demand for – e.g. do we really need 3x as many gender studies majors in business as in the 80s-90s? What for, specifically?
Here’s what one college promises: https://wgs.tcnj.edu/for-students/careers-and-graduate-school/what-you-can-do-with-a-wgs-major/
Would you believe business now needs 3x as many people with deep (three-years-of-a-degree’s-worth) insight into different forms of oppression as it needed 20-30 years ago? Needs them so much it’s worth tens of thousands of dollars spent on it? I personally have trouble believing it.
@ MostlyCredibleHulk:
I actually would believe that. Consider that there have been times and places where educated folk were expected to know Latin or Greek. If a substantial fraction of people doing advanced work in your field speak Latin and you don’t, you’re at a disadvantage. If you lived in San Diego you’d want to know Spanish to maximize income and hire-ability; if you lived in my neighborhood of Brooklyn (Greenpoint), you’d want to know Polish.
So today there exist many people who enjoy regularly speaking another language, the language of Social Justice. It’s a language that’s largely about being offended and taking offense. A modern business needs to hire some people who speak this new language in order to do business with and avoid offending all the other people who speak this language. Having the right number of SJ native speakers on your Board, working as middle managers, working in the HR department or the PR department makes it easier to do business without getting sued or slimed by other companies and individuals. Just like every company needs lawyers to protect against other lawyers, every company needs SJ people to protect against other SJ people.
Unlike lawyers, though, you can’t AIM SJ people. Your own SJ people are as likely (if not more likely) to attack your organization as they are to defend it. Perhaps there needs to be some sort of ethical code and professional exams and such.
I think this indirectly is a high modernist failing, but a failing by induction. The real impact modernism had on our education system is in universal public (elementary and secondary) schooling. It’s hard to spend five minutes in a public school (at least in the US) without being aware of the modernist influences: the architecture, the bathroom passes, the Kafkaesque atmosphere. Even the idea that education should take place in an assigned building at assigned times, with factory bells ringing to mandate a change in subjects—the widespread acceptance of this idea is one of modernism’s greatest triumphs.
Once modernism persuades you that universal education is mandated, I can see people making the leap to universal higher education.
Of course, the spread of an implicit requirement for college degrees is more complicated than this one factor (while the spread of mandatory elementary and secondary education is not more complicated; it’s modernism all the way down), but I can see how it would be a factor.
Every time I go in for jury duty I get weirded out because it’s probably the closest environment to high school that a middle-class adult can experience, short of prison or maybe some parts of the military. And after reading this, the similarities are all recognizable as high modernism.
I suspect that the commonness of anxiety dreams about school says that there’s something wrong with the way education is done.
Why so much anxiety when the punishments are frequently so minor? We’re not talking about PTSD here, I think. It’s something else, but what?
Minor? Maybe it’s changed in the current snowflake age, but when I went to school, the whole system tried to convince you that failure in school meant failure in life. Do poorly in school, don’t get into college, end up poor.
PTSD is usually about urgent current fear. School cranks up anxiety, but it’s oriented towards a relatively distant future.
Maybe I just don’t have a good sample, but I think people have more nightmares about not knowing where the exam they haven’t studied for is located than they do about being fired, even though being fired can cause more damage.
PTSD, at least the “shellshock” variant, is normally about an extended period of high stress (life in a war zone) punctuated by periods of extreme stress (combat, being shelled). School is an extended period of lower stress, punctuated by periods of higher stress. And those in school don’t have much perspective on how much stress they _should_ be feeling, and schools and parents tend to crank this up in order to achieve compliance.
Being fired is a single event which tends to happen to adults with more sense of perspective, and it’s presumably much less stressful than being shelled (I’ve never been shelled). I had nightmares about it, but they faded.
I would say yes. After all, when you get down to it, a degree is simply a way of making one’s knowledge legible to large, bureaucratic organizations. That includes corporations as well as states.
Right, and usually not even knowledge. Look at the entry-level positions that require a degree, but not in any specific field. They don’t care what you learned in college, they care that you made it through a filter for a minimum level of (IQ plus conscientiousness).
Perhaps, or perhaps there are more generalisable skills that people tend to learn in college which make them more productive across a wide range of modern careers. Like advanced reading comprehension and analysis of difficult texts, clear writing skills, the class of skills known as “critical thinking” skills, a basic understanding of many disciplines and subjects on at least a 101 level, etc.
Most economists tend to assume that in general better education leads to increased worker productivity in some vague way, and it seems likely to me that at least some of that is happening here.
If that was actually true, there’d be a wage premium for people who attended college for a long time, but didn’t get a degree, over people who never went to college at all. There isn’t one; essentially all that matters is the degree.
Paul: And there is. People who have “some college” earn more than people with “just a high school degree” but less than people with a BA. They also have a lower unemployment rate.
http://www.pewsocialtrends.org/2014/02/11/the-rising-cost-of-not-going-to-college/
Now, it’s not a huge difference, most of the gain does happen between “some college” and “a BA”. Which isn’t surprising, the binary question of “does x person have a college degree or not” is much easier to measure, which is an advantage.
But again, the gap between college and non-college students is actually getting larger. If you’re a believer in the Efficient Market Hypothesis, it seems very likely that the actual value to the employer of a more educated worker vs a less educated worker in terms of worker productivity must be significant, and probably increasing.
Unless I’ve missed it, the discussion so far hasn’t included the general sorting explanation–that under current circumstances, people who go to college are the sort of people more likely to succeed with or without college. “Substitute for IQ tests” is only a part of that.
I saw a summary of one study that tested a variant, not college vs. no-college but elite college/university vs non-elite. They looked at people accepted by an elite school who chose to go to a non-elite one. The conclusion was that they got a lower salary in their first job than those who went to the elite school, but did as well thereafter, which suggests that employers take the elite degree as evidence of quality, but weaker evidence than is provided by actual performance.
I’m pretty sure I have seen figures showing that most of the difference is from the degree, that someone who completed most of a college education but dropped out near the end is much closer to someone who didn’t go to college, in terms of later income, than to someone who made it all the way. That’s evidence against the human capital model of education.
In New Zealand now, if a farmer has a child who wants to take over the family farm, the child goes to a university and gets a degree in agriculture. That’s the strongest argument I know for tertiary education as being genuinely valuable.
Or at least tertiary education at New Zealand agricultural schools.
It is that last sentence that is the key. Presumably there is a lot of academic knowledge that is useful for running a farm. It’s a good thing when kids go and obtain that knowledge. It does not follow that it is a good thing for the average suburban kid to automatically get a college degree in ..something.. just because that is what he’s supposed to do.
I have direct experience in this myself. I received a four year degree in Accounting because I needed that piece of paper to get a job. Maybe about a year’s worth of my classes were actually useful in my ultimate job. Later, in mid-career I was thrown into a situation where I didn’t know what I was doing. I went back to college and got a master’s in business taxation. That master’s degree took about 1/4 the time as my bachelor’s degree, but was about 10 times as useful as far as knowledge obtained. So it is certainly true that college can be very useful in obtaining useful knowledge. Unfortunately that is not the most common usage of a college degree today — it is instead a signal to employers that you are a conforming, tenacious, and smart member of society, able to handle more complex tasks. Anything learned at college is a bonus but besides the point.
It struck me that liberal democracies have solved the problem
I doubt there is a self-aware high modernist ethos at work there, but I could swear something like 10-15 years ago TIAA-CREF was using “greater good” as an advertising slogan.
One pretty cool collection of Le Corbusier architecture can be found at the UrbanHell subreddit.
If you haven’t read it, you might be interested in Voltaire’s Bastards by John Ralston Saul. There’s a lot wrong with it, but I think the basic claim is sound: essentially, that reason is a tool, not a goal, and worshipping it for its own sake leads to monstrosities of all sorts. Some very interesting analyses of this in fields as diverse as military tactics, fashion design, and urban planning.
I was waiting for the punchline that our Scott has given up leftism for libertarianism as a result of this book, but alas.
James Scott seems pretty leftist. I understand he has interesting thoughts about anarchy that he debated with David Friedman, but I’m putting off hearing them because I would have to watch a video.
It may just be leftist infighting, but it’s curious how whenever you disagree with a leftist or take sides in a disagreement between two leftists, the disagreement is almost always directed in the libertarian direction, and not in the direction of either more extreme leftists or conservatives (except where that overlaps with libertarians).
I would think that’s a sign that libertarianism has some merit to it.
Yes, it has some merit to it. But so does High Modernism, and for that matter kabbalah and astrology.
If you go looking for all the stuff that’s good about libertarianism, then you’ll find a lot of good stuff. But the same goes if you go looking for all the good stuff about statism, or all the bad stuff about libertarianism.
I think you’re stuck on an overly simple model of the political universe. James Scott is an anarchist, which is a left-wing political ideology concerned with decentralization, freedom, and personal choice. It’s not surprising that this has resonances with libertarianism, which is a right-wing ideology concerned with decentralization, freedom, and personal choice.
What makes one “right” and one “left?”
Left emphasizes liberty, equality, and brotherhood. Right emphasizes order, tradition, and justice.
Two related issues:
1) Economic equality. Anarchism takes as a primary goal economic equality, while libertarianism sees inequality as an acceptable price to pay for freedom.
2) Private property. Anarchism sees private property as a form of authoritarian control and part of the problem, libertarianism sees private property as a natural right and part of the solution.
The right supports hierarchy, the left seeks to tear it down.
I’d phrase it more as “the right supports traditional sources of authority, and the left wants to tear them down”.
I don’t think James Scott is so much an anarchist as an anarchist sympathizer. His book on the subject, not as good as the other two books, is Two Cheers for Anarchism.
My conclusion from The Art of Not Being Governed was that he thought anarchy had been viable in Southeast Asia for a long time, but no longer was. And he didn’t seem very interested in the internals, in how stateless societies dealt with problems that states supposedly existed to deal with.
Just to note that what you are describing as “anarchism” in this comment I would describe as a particular version of anarchism, and one I believe has serious internal problems. I have considered myself an anarchist for something over fifty years, as do many (but a minority of) other libertarians.
> Anarchism takes as a primary goal economic equality, while libertarianism sees inequality as an acceptable price to pay for freedom.
One could equally say that anarchism takes as a primary goal economic inequality, and libertarianism sees equality as the consequence of freedom, by using the right-wing definition of equality (of opportunity) instead of equality of results. Anarchist economic equality would require shutting down a lot of opportunities for many people on the grounds they would produce inequality — and that’s without going into the inequality that enables people to shut them down.
There are almost as many flavors of anarchism as there are anarchists or anarchist sympathizers. Small-‘l’ libertarians mostly believe that the WSPQ is a better description of where they fit with respect to the left-right spectrum. The answer is that they’re orthogonal, and ‘real’ libertarians are neither left nor right. The same pretty much applies to modern anarchists.
Of course, there are left anarchists, (James Scott is probably one) and the libertarian-leaners in congress are mostly on the right (because the R-party has room for them, and the D-party doesn’t seem to.)
Did you read the foreword of the book? He actually felt a need to explicitly disavow accusations that he had become one of those filthy libertarians before the book even starts.
I’m not sure who you’re responding to. Indentation indicates that you’re responding to Scott, which doesn’t make any sense, but if you’re replying to me, I’m talking about whether our Scott is moving in a libertarian direction, not the book’s Scott.
I was responding to the assertion that “James Scott seems pretty leftist”. He’s very leftist, and I find it amusing that he’s so leftist he felt he had to write a preemptive disavowal of the rightists (and I am one of them) who will read and quote from his book.
My conclusion from debating James Scott and talking with him was that on any subject he has not thought much about he has conventional left-wing views, on subjects he has actually paid attention to he has interesting views that may well be correct and are generally not consistent with conventional center-left views.
One of the things that struck me reading his books before I met him was that he was saying things many of which would appeal to libertarians, but went to some trouble to make it clear that he wasn’t one of those icky right wingers. That included, I think in a footnote, saying something negative about Hayek at a point when he was agreeing with him.
People may be interested in my blog post after the debate.
Someone else in this thread was also claiming that James Scott seems to mostly agree with Hayek but feels the need to disagree with and criticise Hayek to distinguish from the right. The suggestion seems to be that his disagreement is more a form of signalling than based on any actual argument.
I’m not sure how you could come to this conclusion unless you read his work very uncharitably. He’s quite clear that he’s largely in agreement with Hayek on the idea of localised complexity and the difficulty of comprehending or altering this from the outside, but he disagrees that capitalism and the market are the solution because he believes they entail a similar simplification to that of state projects and that the development of industrial capitalism required state coercion to create the conditions that would make it possible.
You may disagree with him but it seems extremely uncharitable to argue that he only thinks this because he doesn’t want to be right wing. It’s also an argument that can be just as justifiably turned back at you – you have come to the correct conclusion on some things and only disagree with James Scott about capitalism and markets because you don’t want to be seen as a stupid left-winger. There probably is some grain of truth to these arguments (tribes and all that) but arguing that your opponent would see how correct you are if she/he could just give up their tribal biases isn’t normally a good way of moving debate forward.
That’s not what I am saying. He doesn’t want to be seen as right wing because he identifies with the left wing position on various issues he has not thought much about. Given that he identifies as left wing and realizes that the ideas he is actually writing about are ones that sound right wing, he wants to make it clear to readers that he isn’t actually right wing.
I don’t really see a large difference between the position I ascribed to you and your clarification of your position. If anything it seems more uncharitable to say that he’s left wing because he hasn’t thought things through properly. I’d be interested to know which particularly left-wing positions you think he holds that he hasn’t thought through. Part of the book is devoted to arguing that capitalism entails a similar simplification and requires state coercion to run roughshod over local variation and complexity. Now, if you are fervently pro-capitalism you will probably find this part of his argument extremely unconvincing but that doesn’t mean he hasn’t thought about it.
It’s interesting that you think that the ideas he is writing about sound right wing. To me, and I suspect to most left wing people in academia in the humanities or “softer” social sciences they sound like they are based on arguments in this tradition. I read it and thought of Karl Polanyi, Michel de Certeau, Henri Lefebvre, Foucault (and other parts of post ’68 French academia) among others. But most of all his argument seemed deeply anthropological to me and James Scott is himself-at least in part-an anthropologist. Anthropologists publish paper after paper looking at how some external body (often a government but also potentially an NGO or some international body) try to enforce supposedly beneficial changes on a complex local situation that they do not understand. In fact, the argument he makes is very similar to the typical anthropological critique against more reductive forms of social science – namely that they involve massive simplifications of far more complex social realities. When Scott describes old cities as being more “deep” or “thick” than newly created ones he is using exactly the same vocabulary that anthropologists use to distinguish the studies they produce from that of other social sciences.
I don’t read Scott arguing against Hayek as him feeling the need to distinguish himself from the right and I don’t think many people in Scott’s academic tribe would be in danger of seeing the arguments he puts forth as right wing. I think he argues against Hayek because he disagrees with him and has fairly clear arguments as to why he disagrees with him. If you cite someone who has some similar arguments to yours but from whom you differ profoundly on central questions, it would be more strange to not point to the differences.
@Art Vandelay:
My comment on what James Scott is leftist about was not based on the book but on the debate and conversation with him. The debate is webbed. I don’t now remember how much of the relevant bits came up there.
What I was complaining about was not an argument but a footnote saying something negative about Hayek that was irrelevant to the mention of Hayek it was attached to.
I’ve pointed James Scott at this post and thread, so with luck he may be tempted to join in.
Scott Alexander’s review of James Scott prompted me to start re-reading Seeing Like a State and the comments in this thread on what he had to say on Hayek prompted me to look up the points where he references Hayek. Without meaning to be rude your memory must be failing you.
There is only one occasion where he mentions Hayek in the main body of the text and then follows it up in the footnote. On page 256 he writes:
Which leads to the footnote:
Which is quite clearly an argument against Hayek’s position rather than a negative irrelevance.
And perhaps he has many left-wing views that he hasn’t spent much time thinking lucidly about which he discussed in a private conversation with you (or perhaps in his debate with you, but I’ve watched it and although I found his presentation less convincing than I expected it didn’t strike me that he was repeating standard left-wing ideas without giving them any proper thought). It’s entirely possible. But seeing as he makes thought-through left-wing arguments in published works of his, it seems much more reasonable to conclude that he is left-wing on the basis of the thought-through arguments that he publishes rather than the half-baked ones he mentioned in one private conversation.
Certainly possible. What I was probably remembering was a comment on p. 102 where he is agreeing with a point of Hayek’s without naming him, followed by footnote 51 which reads:
“This point has been made forcefully and polemically in the twentieth century by Friederich Hayek, the darling of those opposed to postwar planning and the welfare state. See, especially, The Road to Serfdom (Chicago: University of Chicago Press, 1976).”
I think it is fair to say that the language of the footnote implies hostility to Hayek in the context of observing that Hayek made the same point Scott has just made–and made it long before Scott did.
But thanks for correcting my imprecise memory of the passage.
…when on earth was Scott ever a leftist?
Scott’s always been a leftist. The fact that you didn’t notice it is an example of the very phenomenon I was referring to–he constantly disagrees in the libertarian direction to the point where you wonder why he doesn’t just join them. It’s as if libertarianism was something you could get to by being rational even when your own ideology is opposed to it.
I still see a lot of Scott’s ideas as liberal ends sought by conservative means.
Scott’s a liberal, not a leftist. There’s a difference.
Right — to be clear, I realize that some people use “leftist” in a way that is inclusive of “liberal” (or vice versa), but I think this is a confusing usage and should be discouraged. Leftism proper has little in common with liberalism other than that they’re both opposed to the “right”.
It also depends on if you mean “liberal” in the classic sense or the modern sense. Classical liberals are more right than left in the modern era (though Trump may change that).
Classical liberals are pretty much the 19th century version of libertarianism. To put it the other way around, the opponents of liberalism stole its name, so we had to steal “libertarian” from left anarchists (and before them believers in the doctrine of free will) to label ourselves with.
The only substantial difference I can see is that classical liberals were in favor of expanding the franchise and modern libertarians have no particular position on that subject.
But libertarianism is the last High Modernist political ideology standing. Libertarianism has a rational model of how society should be ordered, and holds that any local knowledge about the practical necessity of regulation or public ownership of goods should be ignored. Pre-modern societies like Medieval Europe were definitely not in any way remotely libertarian.
Libertarianism may be more compatible with local knowledge than some ways of “rationally” organizing society, but that doesn’t mean it’s not throwing away centuries of built up metis about how to run a nation state in the modern world.
High Modernism is about central authority making rules for the whole of society. Libertarianism is about prohibiting any central authority strong enough to do that from existing. You’re using at least one of those terms in a very different way than most people do.
Zero is a number just as much as 1 million is. Saying no community can address a problem through public action is an incredibly intrusive law to impose on all society. Libertarianism is about going and saying we know better than anyone who has tried to address a societal problem before and the answer is always less regulation or community structure. Most of the local solutions that are described here would not be permitted under a libertarian legal system. The traditional solutions are almost all about the local community overriding the private property rights of individuals in order to promote long term stability.
I think this response stays true if we simplify to “Most of the local solutions that are described here would not be permitted under a […] legal system.”
Talking about how we get to medieval French township farms from 21st century America is basically a red herring – everything is so different that even radical changes wouldn’t create that outcome. Libertarianism offers to radically slash away one set of restraints set by modern society. Socialism offers to slash away a different set of impediments. None of them (when being vaguely honest) even imply they’ll produce tribal/communal outcomes, just different modernized outcomes.
I suppose I don’t think libertarianism will get us back to some 16th century way of life, but I certainly don’t think it’s “the last High Modernist political ideology standing”. Neither socialism nor state capitalism are dead, and pretty much everything going is recognizably High Modernist.
> no community can address a problem through public action is an incredibly intrusive law to impose
Any voluntary community can solve any problem they like in a libertarian system. They just cannot impose that solution on people outside their community. Most of these solutions would work just fine – they’d generally be implemented as a trust system, or perhaps a particularly intrusive homeowner’s association or condo board.
FWIW, I abandoned extreme libertarianism in my teen years, but I think the direction from the current world is the correct one, even if some get the magnitude wrong.
@alexsloat:
So can Walmart be prevented from opening up a new location in a municipality (say, by majority vote by village elders or something) because they think it will be harmful to the local economy? Or is that against libertarian rules?
It can be prevented from opening on any given parcel of land by the owners of that land. A town is not a voluntary community unless it has explicitly been set up to be one by a set of rules one agrees to when moving there (e.g., with all the land being owned by a trust that ensures democratic control of various issues, perhaps in the same vein as a condo board). Simply living in the same area as a person who owns something doesn’t give you veto rights over how they use it, unless they’ve agreed to such in advance.
This seems pretty arbitrary. Are we assuming the US government is guaranteeing ownership of property like in the current system? Or could we have a system where it is the council of elders backed by the local militia who guarantees ownership of private property? How does your particular flavor of libertarianism justify/model private ownership of land in the first place?
The “council of elders” part was supposed to imply that this is a long-running society with its own mores and traditions and a tight-knit community rather than an atomized US suburb. I should have been more explicit about this, I suppose.
I (left anarchist/libertarian) oppose your form of libertarianism. A system where a single member of a community can defect from the community for a huge paycheck and as a result allows Walmart to move in and disrupt the local economy is not the sort of system I would recommend.
My particular flavour of libertarianism is mostly the status quo with the regulation book trimmed 50%, so not exactly radical. I’m more playing devil’s advocate on this particular point.
I think this is an important difference between left and right thinking. To the left, the atom of society is the group – “the community”, “people of colour”, and the like. Membership is assigned to you, not chosen. To the right, the atom of society is the individual, or at most the small, tight-knit voluntary group like the family. So “harming the community” sounds like a non-concept to the right, while “tyrannizing the individual” sounds like a non-concept to the left. It’s an interesting battle of the blind spots sometimes.
@wysinwygymmv
I’m curious how much restraint you think should be applied to individuals to prevent ‘disruption’ of the local economy. Should I be allowed to start a landscaping company that provides better service than an incumbent, if it would take sales away from that incumbent? Should I be allowed to import cheaper goods from another town, if it takes away sales from local producers? Should I be allowed to invent a labor saving device that allows me to produce widgets more cheaply than incumbents, thus taking away their sales? Should people be forced to buy more expensive and less desirable goods and services if this prevents local disruption?
Competition, innovation, and individual choice are disruptive, but I see them as essential to economic progress.
@alexsloat
Under Libertarianism they don’t seem to be able to impose it on people inside their community either, so any solution that requires any number of people other than one to do something is a non-starter.
Why isn’t a state just a particularly large homeowner’s association?
Thanks, interesting take.
I’d like to take the chance to point out, however, that despite my leftish proclivities, I am personally very individualistic, antisocial, misanthropic, and very much not a joiner. I just don’t have any school spirit.
Perhaps to some extent, it is longing for a social context that feels right to me that motivates my leftward leaning. Perhaps there is also an element of the acknowledgement that in terms of ethology, humans are pack animals that really do have a hard time surviving as individuals or nuclear families and that therefore the atomic unit really is a cohesive society.
W/r/t the current example, what I am worried about is individual freedom to, say, own and operate a small business being undermined by the freedom of Walmart to introduce their economy of scale merely by the action of one defector. So in this case, I really am concerned about the rights of the individual, which I think are better served through small-scale local municipal government than by Walmart and the federal government.
> Under Libertarianism they don’t seem to be able to impose it on people inside their community either, so any solution that requires any number of people other than one to do something is a non-starter.
Most conceptions of libertarianism include contract enforcement, as well as a wide range of voluntary groups (the corporation being the most obvious example, but far from the only one). If you get deeper into the libertarian literature, you see a lot of praise for things like 19th century mutual aid societies – you could in the past buy things like private legal fee insurance, for example. That’s a voluntary organization of some size (some had thousands of members), where the members all have mutual obligations, albeit limited and well-defined ones.
> Why isn’t a state just a particularly large homeowner’s association?
In practice it is, which is one of many arguments that led to me moderating my views substantially on the topic. However, there’s a substantial difference in the fact that a HOA is opt-in, while states are opt-out (and in practice, actually opting out is sometimes quite difficult).
> Perhaps to some extent, it is longing for a social context that feels right to me that motivates my leftward leaning.
Interesting. I could believe it.
> W/r/t the current example, what I am worried about is individual freedom to, say, own and operate a small business being undermined by the freedom of Walmart to introduce their economy of scale merely by the action of one defector. So in this case, I really am concerned about the rights of the individual, which I think are better served through small-scale local municipal government than by Walmart and the federal government.
Any individual can start their own firm, Walmart or no. Lots of local shops survive near big box stores. You seem to want to give them the right to succeed at it, though, which seems implausible with or without Walmart. Also, “defector” is somewhat loaded verbiage – in general, the term implies someone who makes the community worse off, while Walmart usually succeeds through making the community better off (or else, why would people shop there?).
@random832
One response I find compelling. TL;DR is incentives matter.
I think you might have something there. I mean, I don’t think a HOA is strictly opt-in (if you are born in one, you have to “opt out” of it by moving somewhere else rather than living in your parents’ house until they die and you inherit it), it’s just much easier to opt out of. Which I just realized isn’t really a function of how large the HOA is, but how large the area not covered by HOAs is. If it were impossible to avoid being in any HOA because all the houses and all the land you might build a house on are already covered by one or another, then HOAs would just be another form of government.
I suspect that this is actually inevitable, or would be in a purely libertarian universe with no higher level of government to restrict what HOAs can force their members to do.
“But libertarianism is the last High Modernist political ideology standing.”
“Zero is a number just as much as 1 million is.”
That is in the same class of poor arguments as ‘atheism is a religion’.
@wysinwygymmv
“So can Walmart be prevented from opening up a new location in a municipality (say, by majority vote by village elders or something) because they think it will be harmful to the local economy? Or is that against libertarian rules?”
If the village commune owns all the property Walmart would like to build on and the accepted way of deciding how to use the village’s property is by majority vote of the elders, no problem. Absent that, if the elders can arbitrarily tell John Farmer that he is not allowed to sell his front corn field to Walmart and tell the entire dirt-poor Garcia, Brown, Johnson and Andrews clans that they may neither buy cheap goods from Walmart nor take jobs there, that is not a libertarian village.
You can tell a story where the village elders have the best interests of the community in mind and are correctly judging that allowing Walmart in would break the bonds of community, and that John Farmer is behaving evilly by wanting to defect and sell his field. Or you can tell a story where the elders are a group of old men interested in keeping their sons working for them, their daughters marrying where they are told, and the lower orders in their places–a control threatened when an outside operation gives the aforesaid sons, daughters, and lower orders chances the existing order doesn’t offer them. Which story you choose to tell may well depend on your politics; real cases probably vary.
@random832
How do you define government?
Since you asked …
I guess that you’ve never bought anything made in China, have you?
By the way, your argument about size seems overly general: it is true that large organizations are more prone to corruption and certain kinds of inefficiencies than small organizations, but they are also able to exploit economies of scale in a way that small organizations can’t.
I’d describe Libertarianism as prohibiting any governmental central authority strong enough to do that from existing. Large companies get to be as authoritarian as the market permits.
To tie this back to Scott’s article: Google makes our society more “legible” – under more surveillance – than just about any other organization.
I’m not sure how you define authoritarian, but as a libertarian I like to choose who I give authority to and have the option to withdraw my consent for that authority. I’ve got that with businesses mostly (including Google), I don’t have that with governments mostly.
@IrishDude:
Go ahead and withdraw your consent from Google.
Now email me the final revision of the PowerPoint for our team’s quarterly strategy meeting. In case you forgot my email, it’s ProjectManager69@gmail.com.
@Synonym Seven
1. I trust Google has better incentives than government to not be malicious. Ex-Google employees spilling the beans on improper use of information is likely to lead to swift changes, to avoid mass exodus from their service. It’s not easy to opt-out of government, leading to poorer incentives to refrain from abuse.
2. If I’m feeling privacy-conscious, I can ask you to provide another email address from a different service I trust better, and send the slides to you that way. Outside email, I could use a range of other 3rd party services to send the slides to you, like dropbox or fedex’d usb drives. Google doesn’t have political dominion over me and can’t make me use their service, even if my friends or co-workers use them for some services.
John C. Wright’s The Golden Age, The Phoenix Exultant and The Golden Transcendence touches on this well. There’s a libertarian government, but a private and voluntary association has enormous clout.
Mass exodus to where exactly? If you live in a democratic country and your government does something that pisses off lots of citizens, citizens can vote for somebody else. If Google pisses off lots of its users they can’t do anything about it since there is no real alternative.
The main incentive that Google has not to do something obviously shitty is that Google exists under a government that can beat it with a big stick if they do something obviously shitty.
Yes, you can ask your boss to use another email address just for you in order to cater to your specific grievances with Google. Sounds like a good strategy to advance your career.
Or a messenger pigeon with a microSD strapped to its leg.
Depends on what Google service you’re using. Since email was brought up, here are 9 different email providers you can use besides Gmail. Here are 14 alternative search engines.
Government serves me poorly, I’ve voted, and I’ve never gotten who I wanted (and even the people I’ve voted for have positions faaaaar from my preferred policies). Democracy doesn’t work too well for those in the minority, while the market provides niche products and services for very small numbers of people (one example).
See links above.
Google got big because they served consumers well so people started voluntarily using them instead of competitors. Their PageRank algorithm got more relevant results than any other search engine. They’ll only maintain their market share to the extent they continue to please consumers. Speaking for myself, I’m very pleased with their products (search, Gmail, Drive, Android, and Google Home).
I don’t think I’ve gotten good value from government services, but well, they’ve got guns and make me ‘buy’ their services.
If you feel Google is maliciously using your information, and it’s a big concern to you, you’ve got lots of options that don’t involve you trying to gain citizenship in another country. If it’s important to you and your boss doesn’t understand, get another job that doesn’t require use of Gmail. Inconvenient, sure, but more convenient than emigration.
New business idea!
Well, there seem to be multiple “libertarianisms” floating around out there.
(There’s a relevant old comment of mine I want to link, but I can’t find it right now.)
Some things called libertarianism like local control and are opposed to central authority, yes. But I think the more usual sort of libertarianism is focused on the rights of the individual, not the village, and is opposed to any governmental authority, local authority included.
Because after all it’s often local authority that’s the most directly oppressive, that most directly abuses its power to harass those that the people in charge happen to dislike. In the US we have this crazy two-level structure where the federal government is often what stops the local governments from doing this sort of thing!
Of course, I think the proper libertarian reply here is, instead of having two levels of government where one stops the other from acting, why not get rid of both? But just getting rid of the central government, and leaving the petty tyrants in place, hardly seems libertarian — well, assuming we’re talking about individualist libertarianism, anyway, not that other sort.
Personally, I think it’s very confusing to group these together under “libertarianism” when they seem so different.
Clearly a confusing problem due to the failure to properly apply trademark law. Since the context is conversations on this forum, I suggest that the obvious rule is to let whoever here has been describing himself as a libertarian longer own the trademark and permit others to use the term only if they license it from him.
We could apply the same approach to “anarchist,” but before deciding if I’m in favor of that I might want to first do a little research on how old Semiel is.
resettlment → resettlement
twon → town
villsagers → villagers
idiosyncracies → idiosyncrasies
housholds → households
pulic → public
officialsfrom → officials from
notaires → notaries
ofboth → of both
unlesss → unless
simpl eway ofdetermining → simple way of determining
relevantto → relevant to
studes → studies
(liveable → livable)
Scott, you have read Antifragile, right? A lot of your recent posts have a Talebian worldview (“imposing order on complex systems is default harmful; and eagerness to impose order is the greatest failing of modernity”), but you haven’t name-checked him yet.
In recent months this point of view seems to be going viral. Helped enormously, no doubt, by the rise of Donald Trump.
EDIT: I finally got to the end of the article. Zomg! Is this the first time you’ve called out LessWrongian rationalism for being a little bit… kinda, sorta… completely diametrically opposed to philosophies that actually make people more effective in the real world?
For which the best example is their approach to AI. As if proving variations of Löb’s theorem (or whatever) is EVER going to matter to ANYTHING.
No, I haven’t read Antifragile. Maybe I should.
You should. It has a lot of shortcomings… but in the end it is both captivating and profound, IMO.
Also, avoid its prequels, they are less good.
Antifragile is very definitely relevant here, but really at the level of systemic risk. Seeing Like A State seems to argue that Tanzanian farmers will produce more bushels per acre of wheat using their historical methods than will lab-coated science farmers wearing Big Ag patches above their pocket protectors. Maybe true, maybe not; I don’t know enough to know. But Taleb convincingly makes the argument that thinning down the number of strains and propagating them more widely creates a significant systemic risk that, if I remember it correctly, is perhaps low probability but very high magnitude. Fifty per cent of the world’s wheat or rice being the same strain makes our food supply much more vulnerable to disease or pestilence even if those particular strains are bred to resist pests and disease.
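To make that systemic-risk point concrete, here is a toy Monte Carlo sketch; the 2% per-strain failure rate and the independence assumption are invented for illustration, not Taleb’s numbers or real crop data:

```python
# Toy Monte Carlo: how concentrating the food supply in fewer strains fattens
# the tail of the loss distribution. The 2% per-strain failure rate and the
# independence assumption are made up for illustration, not real crop data.
import random

def p_lose_half(n_strains, p_fail=0.02, trials=50_000):
    """Estimate the probability of losing at least half the harvest, assuming
    the harvest is split evenly across n_strains and each strain fails
    independently (disease, pest) with probability p_fail."""
    bad_years = 0
    for _ in range(trials):
        failures = sum(random.random() < p_fail for _ in range(n_strains))
        if failures >= n_strains / 2:
            bad_years += 1
    return bad_years / trials

# The expected loss is the same (2%) in every case; only the tail changes.
for n in (2, 10, 100):
    print(f"{n:3d} strains -> P(lose >= half the crop) ~ {p_lose_half(n):.4f}")
```

With two strains, one unlucky year wipes out half the crop; with a hundred strains, losing half requires dozens of simultaneous failures and essentially never happens in the simulation, even though the average loss is identical.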
Reading Taleb is a bit of a chore. He’s extraordinarily pompous, and I want to punch him about every tenth page. But then he articulates a really good idea every eighth page, so I tolerate him.
What I’ve seen of Taleb is basically “people underestimate tail risks”, over and over again. It’s an important point, but it seems like a whole career on the topic is a bit much. Is there more to him than that?
Yes. His style is basically lots of insight from him and heterodox thinkers wrapped in dramatic prose and anecdotes about how economists are terrible.
Definitely worth it.
The overall theme is ‘people underestimate tail risks’ but also about the difference between designed and evolved systems, fragility, robustness, antifragility, etc.
Antifragile’s point is good- best expressed as a short paragraph with a link to a few examples. Taleb’s writing in the book is pretty execrable, though, and when I listened to it as an audiobook his arrogance and self-aggrandizement was even more obvious and repellent.
The man has a lot of good ideas- I just find his style very offputting.
Reading Taleb is excruciating for me.
He has a bunch of genuinely good insights, but he clearly can’t distinguish those from his unremarkable or simply incorrect ideas. As a result, he can’t reliably build any kind of ‘ascending’ argument because he has no idea what’s important, true, or novel. You pretty much just have to find a good summary or pick through the midden for something of value.
His recent Skin in the Game excerpts have been especially wretched. They’re a bizarre mix of clever ideas, cliches trotted out in seeming ignorance of precedent, vague aspersions against the poor, and incredibly dubious ‘formalizations’. The low point was his claim that he had debunked Piketty’s Capital with a “formal proof” so that he could not be ignored or contradicted. He seemed very proud, but ended up sounding like a crank ‘disproving’ Pythagoras; when he starts claiming mathematical disproofs of observational data I have to conclude he’s a bit cracked.
To be fair, Piketty’s Capital is pretty dumb.
formid0’s comment does not address the argument in the book, nor the datasets collected, nor the methodology by which the data was collected or crunched. It is pure ad hominem against a book that had (some) very well researched sections.
If he’d said “Piketty is dumb, and therefore his book is wrong”, that’d be argumentum ad hominem, the logical fallacy of attacking the man instead of the argument. (That said, as most good rationalists have probably noticed, “logical fallacy” is roughly synonymous with “Bayesian evidence”) “Piketty’s book is dumb” is unsubstantiated, but not fallacious – it addresses the subject at hand directly. That said, he certainly gave no actual argument, merely an unsubstantiated attack.
If you want an actual argument against Piketty’s claims, https://www.samizdata.net/2014/05/piketty-and-the-shoe-event-horizon/ is the best combination of brevity and actual anti-Piketty analysis I’ve come across.
I believe it’s time to link the old guide to just what the ad hominem fallacy is and is not… 🙂
My complaint is basically that Taleb is still being shady no matter how dumb you find Capital.
If I say the sky is red, and you offer a “mathematical proof from first principles” that it’s blue, that’s still dubious no matter how right your conclusion is. Rebutting data with theory can mean there’s an interesting misunderstanding underway, but you still need to challenge the actual data to consider the claim disproven.
It’s always much better to use observation of reality to disprove an argument based purely on logic, like Diogenes the Cynic refuting Zeno’s paradoxes on the impossibility of motion by simply getting up and walking across the room. Samuel Johnson must be a contender for the finest example of this line of argumentation:
‘After we came out of the church, we stood talking for some time together of Bishop Berkeley’s ingenious sophistry to prove the nonexistence of matter, and that every thing in the universe is merely ideal. I observed, that though we are satisfied his doctrine is not true, it is impossible to refute it. I never shall forget the alacrity with which Johnson answered, striking his foot with mighty force against a large stone, till he rebounded from it — “I refute it thus.”‘
I have a problem with Antifragile— I think the idea of systems which grow stronger when challenged is an important idea, but Taleb (so far as I know) doesn’t talk about systems being anti-fragile within a limited range, so anti-fragility sounds like a fantasy of unlimited strength.
I also thought of Taleb while reading this review. I think Taleb’s Incerto cycle provides “what’s left of [J Scott’s] case” once the Green Revolution and other modernist improvements are taken into consideration. On the global scale, an increase in systemic fragility and tail risk. And on the individual level, the disappearance of opportunities for the development and exercise of virtuosity and local wisdom, which may not contribute to effectiveness toward a particular prescribed goal, but seem to broadly increase human life satisfaction across domains.
As Scott A. mentions, much of the difference in efficacy between the local view and the State view seems to stem from the imposition of a fragile foreign whole system versus the gradual evolution of robust local solutions. Taleb would see this as a failure of epistemic humility. The way metis is used here seems compatible with the type of local knowledge that contributes to markets and systems like futarchy, but on a smaller, more transparent scale.
I’m attracted to the idea that “rationality” is best applied to meta-level problems like X-risk, while the evolutionary model of local mutation, experimentation and selection, and incremental change over time is a sufficient model for other issues, possibly because of my own insufficiently-interrogated idiosyncratic preferences. On these “blue ocean” problems, applied rationality, with its explicit use of conditional probabilistic modelling, is effectively becoming a theoretical arm of the incremental probabilistic process that plays out over the long term to create traditional knowledge. This, plus rationality’s usefulness in discovering and explicating interesting solutions from cultural evolution, is a large and worthwhile project.
Brad DeLong wrote a good review of this some time ago, comparing Scott to Hayek, of all people: http://delong.typepad.com/sdj/2007/10/james-scott-and.html
Distinguishing worthwhile local knowledge from stuff that can be happily done away with is difficult, and I think it is easy (but wrong) to take the leap from “technocrats are bad at taking advantage of local information” to general anti-intellectualism, which is how I’ve seen Scott’s argument taken before*. One of the hallmarks of high modernism, as far as I can tell, was ludicrous overconfidence on the part of purported experts about the extent of their knowledge and the difficulty of the problems they faced (on the other hand, the green revolution happened, so clearly someone somewhere figured something out). That, I think, is a product of ideology, rather than an inevitable result of technocratic approaches to governance.
*to be clear, by people who weren’t James Scott.
Yeah, Hayek was my first thought reading this review as well. This is straight out of The Use of Knowledge in Society.
As for the Green Revolution, they attacked the biology side of the problem, not the economic or agricultural side. Create higher-yielding strains of crops, better fertilizers, and the like, and they can be added into any agricultural process without displacing many (or sometimes any) of the useful bits of tradition. Better tools will always be easier to integrate into people’s workflows than better processes.
I keep meaning to read this book, but haven’t gotten around to it yet. I might have to push it up on the pile.
One thing I did want to note is that the census worker “murder” was probably a suicide staged to look like a murder for insurance purposes.
Great article. (Another similar story criticizing these concepts is Michael Ende’s Momo, which anyone who hasn’t read should drop everything and read). Thoughts:
a) from personal anecdote, I’m torn on unplanned cities – on the one hand, I really love Jerusalem, in large part because it is ridiculously unplanned (or rather arbitrarily planned by building streets away from wherever the border was at the time for the last century). On the other hand, I have a friend who lived in Dallas and says that, due to its lack of zoning regulations, it’s the most ridiculously terrible city in America, since it has no downtown or anything, and everything’s full of the giant highways you need to get from anywhere to anywhere.
b) The inside view on occupational licensing: A friend of mine recently told me her sister was apprenticing for her hairdressing license (in Germany, where apparently that’s a thing). From her view it wasn’t a regulatory burden, just the best way to get into the field. Scott’s mentioned how non-agentic most people are; for most people, if you want to become a hairdresser you don’t self-study hairdressing and buy a streetcorner building and advertise and make your shop, because that’s really really hard. It’s much easier to find someone willing to apprentice you and go through standard channels. (She also mentioned that a hairdresser’s job is putting toxic chemicals on people’s heads, so that might not be the best example for pointless licensing).
(My personal example for that is programming: There’s a tendency among self-taught programmers to bash people who learned it in college, but for me, I was never one of the cool hacker kids in school. I needed college classes, because I needed a formal environment to learn this stuff. And the fact that college also guides you to non-major education is the main reason I managed to get into it).
c) I’ll take a step to defend (modern) scientists, who get bashed a bit too much, and on the whole seem to have the right mix of expertise and humility. Most of the agriculture/forestry/geology scientists I know spend time in the field, or grew up in a farming community, and know the inside-view picture because they’ve been inside. They’re not outsiders coming in and teaching the Tanzanians, they’re insiders who put their insider knowledge together with studying issues. (note: This is for relatively hard sciences. I don’t know many social scientists).
d) This being the past also explains the whole “your ‘brilliant solution’ is foolish, evil supervillain!” meme. I generally hate that meme, because experts generally aren’t clueless idiots who don’t know anything about their fields, but apparently there used to be a wide class of politically powerful people LARPing as experts, which makes me feel a heckuva lot better about it.
e) Did anyone else read this as Scott talking about himself in the third person?
There’s a difference between being able to take a structured course to learn your discipline and being required by law to do so. Hairdresser licenses are unfortunately a thing in the Czech Rep as well. Not only are you required to do a course, you also have to spend the first one or two years as somebody’s apprentice before you’re allowed to open your own salon. I often give this as an example of senseless regulation whose main effect is less competition in the hairdressers’ business. Even if they do use bleach, the worst thing that can happen is that your hair gets ruined and your scalp irritated for a few days. Such hairdressers then quickly lose their customers. Also, the hairdresser I visit has a lot of various certificates on the wall in her salon, some of which are not at all mandatory. If I wanted my hair bleached and were unsure about her bleaching skills I could check whether she has a certificate in that and proceed at my own informed risk if she doesn’t.
To clarify: On the whole I still lean towards removing most licensing requirements (especially in America, where the bureaucracy is unusually terrible), but I think there’s a nontrivial case to be made for them.
More specifically, you mention that the main effect of this regulation is less competition for hairdressers, but the effects of this aren’t entirely bad: It gives a clear, simple path for non-agentic people who want to become hairdressers, and helps protect and stabilize their job structure, which may be really convenient for the average (aspiring) hairdresser.
(Also, to clarify: I agree with you about the bleaching, I meant that as it’s a job where the pro-licensing position has an easy scare tactic, it might not be the best example to focus on when demonstrating the anti-licensing case.)
I think there still are, and always will be, such people. The problem is that it’s hard to distinguish these people from the inside-view scientists who know what they’re talking about and how to shut up when they don’t, especially because the LARPers tend to be loud and prominent and write books advocating things, while the scientists are just kind of doing their jobs.
Yeah, this is essentially the problem the EU has had for the last decade.
used to be?
Are you possibly thinking of Houston? Dallas has zoning, a pretty well defined downtown and decent surface street connections.
I, um, actually think you’re right. *wipes egg off face*.
On the subject, are there any (other?) good examples of cities that are both modern/rich and unplanned/unregulated? Seems interesting for comparison on which improvements are due to modernity and which are due to planning.
Houston is not that bad. Yes, it’s sprawly, not particularly pretty, and you almost certainly need a car. But the lack of zoning means that there’s plenty of stuff nearby so you don’t have to spend too much time on the giant highways or plodding through suburbia, and cheap real estate makes it easier to live close to work. It’s not the best place I’ve lived, but I’d take it in a heartbeat over San Francisco.
Another thing to note is that while Houston does not have Euclidean use-based zoning, it still has minimum parking requirements, minimum setbacks, and in many cases minimum lot sizes. Its downtown also underwent some of the worst mid-century urban renewal resulting in a massive parking crater, and the Texas DOT seems to have some kind of freeway fetish that the state government is happy to fund without limit.
Basically, Houston has all the urban land market distortions that promote formless car-oriented sprawl, with none of the haphazard restrictions people have put in place elsewhere to limit it. The upside is that those restrictions tend to catch all other development as collateral damage, so Houston has avoided the housing market inflation seen in other growing metro areas. The downside is that the development being incentivized is still mostly terrible.
@shakeddown:
I can’t speak to Germany, but here in the US the reason hairdressers became the quintessential example of pointless licensing was precisely that the licensing rules built around that conception of the job were applied to people who didn’t want to put toxic chemicals on people’s heads. For instance, African hair-braiding. Suppose you’ve been creatively hand-braiding your own hair and that of your family members for years and you’re really good at it and people want to pay you money to do theirs too. The State comes along and requires you to get a cosmetology license for which you need to spend a thousand hours to pass tests on bleaching and dyeing and straightening hair – all of these being services that your customers explicitly don’t want from you.
I think it’s interesting to read this alongside Uncontrolled, whose theme is basically that the early days of science were a special case and we shouldn’t keep expecting to be able to do a couple of isolated experiments and get a perfect simple formula that generalizes to every setting. It worked for the hard sciences, it just barely worked for medicine, but as soon as human minds get involved everything becomes an insanely tangled web of millions of interacting variables and the best we can hope for is widespread, repeated experimentation to guide some sort of evolutionary process.
So scientific experiments were invaluable for guiding the evolution of Western farming techniques, but declaring it to be THE SOLUTION and throwing it wholesale at a different culture, climate, ecosystem and set of crops was pretty much guaranteed to fail.
It calls for some sort of epistemic humility about the complexity of human systems that demands a very low prior belief in the value of any new policy, because the vast majority of changes are bad and even the good ones end up being implemented badly. Which is pretty much the opposite view of High Modernism, which holds that THE SOLUTION to building a perfect society is already pretty much nailed down and we just need to get the idiot peasants out of the way so we can implement it already.
The rationalist and EA communities at least talk about epistemic humility, so that seems like a good start.
The call has been answered and summed up here.
This extensively critiques Agriculture By Outside Science, but the first thing it makes me think of is the Green Revolution, and the impression I have of the GR is that it was 1. exactly that, and 2. amazingly successful.
I notice I am confused. What am I missing?
Short version:
From the turn of the century to WWII, western agriculture increased yields ~10x. Then fairly soon after WWII, we proceeded to ‘export’ these technologies to poorer countries and made a royal fuckup of things. Around 1960-ish, we started to adapt to local conditions and this is the green revolution.
Some examples:
Western plowing techniques and artificial fertiliser are fantastically suited to temperate to cold-ish climates where rainfall is spread more or less evenly through the year, but cause horrible soil erosion and degradation of soil quality in warm climates with seasonal rain. The first try went rather badly and possibly caused some of the later famines. The second try, where technologies were developed to fit local conditions, went rather well.
Same with pesticides: they work nicely to get rid of pests in England, not so well when what you’re actually doing is removing competition so that you are essentially breeding locusts. Sturdier breeds of plants work well in Zimbabwe with fewer pesticides, though they can’t grow in England.
Even telephone lines; when I did the Engineers Without Borders thing, we were told of an aid project in the 1950s that set up telephone poles to get communication to some villages. They were operational for less than a month when the elephant migration passed and the bulls picked up every single pole. Telephone lines are now underground where elephants pass.
Besides what Fossegrimen says:
1. Industrial agriculture doesn’t restore fertility to the soil in the long-term, so it entails drawing down on thousands of years of nutrients stored in the soil.
2. Since industrial agriculture doesn’t restore fertility to the soil, nutrients must be derived exogenously through fertilizers. Synthetic fertilizers are widely used, and they are derived from fossil fuels. In some sense, human food calories are derived from fossil fuel energy. This makes industrial agriculture unsustainable, and entangles agriculture with energy issues.
So they were successful in the short term by using “brute force”, using fossil fuel energy to drive up per-person (but not per-acre) yields by leveraging the advantages of fossil fuels, but whether it’s a long-term success (on scales greater than a single human lifespan) remains to be seen.
While the energy source used to produce fertilizer is generally fossil fuels, because most energy sources are, it isn’t a requirement. The most common fertilizer chemical is ammonium nitrate, made from nitric acid (which comes from air at very high temperatures plus water) plus ammonia (made from atmospheric nitrogen plus hydrogen, the latter of which is currently sourced from hydrocarbons but could easily be created by electrolysis if non-fossil-fuel energy ever got cheap enough to compete).
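For a sense of the scale involved, a rough back-of-envelope sketch of that chemistry; the molar masses are standard, but the ~50 kWh per kg of hydrogen electrolysis figure is an assumed round number, not a measured plant efficiency:

```python
# Back-of-envelope: hydrogen (and electricity, if it came from electrolysis)
# needed for one tonne of ammonia via Haber-Bosch: N2 + 3 H2 -> 2 NH3.
# Molar masses are standard; the 50 kWh/kg H2 figure is an assumed round
# number for electrolysis, not a measured plant efficiency.
M_NH3 = 17.03        # g/mol
M_H2 = 2.016         # g/mol
KWH_PER_KG_H2 = 50   # assumed electrolysis energy

mol_nh3 = 1e6 / M_NH3        # moles of NH3 in one tonne
mol_h2 = mol_nh3 * 3 / 2     # stoichiometry: 3 mol H2 per 2 mol NH3
kg_h2 = mol_h2 * M_H2 / 1000

print(f"H2 needed: ~{kg_h2:.0f} kg per tonne of NH3")
print(f"Electrolysis energy: ~{kg_h2 * KWH_PER_KG_H2 / 1000:.1f} MWh per tonne of NH3")
```

That works out to roughly 180 kg of hydrogen and on the order of 9 MWh of electricity per tonne of ammonia under these assumptions, which is why the “if energy ever got cheap enough” caveat is doing a lot of work.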
Well, yeah. But that’s a BIIIIIIIIIIIG if.
For the record, I am deeply skeptical that fossil fuel use can ever be substantially replaced by anything else, including nuclear. To me, the human population graph looks like the first half of the population curve for any animal that suddenly finds a new source of chemical energy (for most animals, that just means food): the exponential growth before the inevitable population crash. I think the best we can do is to try to make the crash less horrific.
(So yeah, not a techno optimist.)
I tend to generally be optimistic, because human societies contain enough smart people that we can generally find alternatives to shortages. Energy will likely get more expensive, but sufficient economic growth will make that survivable.
So far, every large-scale human society except the current one has ultimately succumbed to a shortage of something-or-other. Sometimes after a few decades, sometimes after a few centuries, but so far it has happened every time. Empirically, I don’t see how this claim could possibly be true.
My view of human society is largely driven by thermodynamics, so I have a rather different view. Economic growth and growth in energy use are almost the same thing (a small fraction of economic growth comes from greater efficiency, but efficiency curves are always subject to diminishing returns). The idea that the economy could grow and energy could become more expensive at the same time is, from my understanding, a contradiction in terms.
Notice how every time the price of energy spikes, there is a recession. Though I found a rebuttal to this idea while searching for that image, so I’m going to read that soon.
I do wonder how much of the current cost of nuclear is on the regulatory side. Fukushima was mid-seventies technology as I recall it; that’s now forty years ago. Is it even plausible to think that we could build relatively cheap but very safe nuclear plants?
@howardtreesong:
Maybe if the so-far unproven Thorium technologies are even 1/2 as good as the fanboys insist.
But you have to take into account the whole-systems view:
1. Energy is required to mine nuclear fuel. So far, all the equipment is tooled to use diesel fuel, not electricity. So a huge hurdle is electrifying mining equipment, or taking the efficiency hit of using good nuclear-produced electricity to produce diesel fuel to enable mining.
2. There’s actually not that much usable uranium. A few hundred years at most at our current rate of energy consumption (which might not be enough, see point (1)).
3. For Uranium fission at least, the waste problem has not been solved. Spent fuel ponds could easily cause a huge ecological crisis if there is a big enough catastrophe to disrupt their cooling systems.
4. Due to risks of catastrophic failure and grievous risks to the health of workers, some amount of regulation and bet-hedging will always be required with nuclear, at least until human morality and character becomes perfected (don’t hold your breath).
It seems like virtually all past societies have succumbed to a shortage of military force, or occasionally a shortage of humans (the Columbian plague wiping out the Americas) or royal heirs (various smaller states being subsumed into larger ones, Burgundy being the first that comes to mind). Societies or states falling due to economic shortages are incredibly rare – the Mayans, and arguably the Soviets, are the only ones that come to mind.
I do agree that expensive energy will make us worse off, and substantially so. But it’s not an automatic death-knell for society.
As for nuclear, diesel fuel for uranium mining equipment is not a real problem. We will never literally be out of oil, it will just stop being economical for automotive and electrical use. Specialist use (aviation, chemical plants, certain specialized equipment like mining gear) is a trivial percentage of the global usage and we could sustain that for millennia. And safety concerns for nuclear are grossly overblown – it is literally safer than solar or wind.
@wysinwygymmv says:
the same is true of the silicon in solar panels or the aluminium in windmills.
the idea that “a few hundred years” is not a lot of time is patently silly.
There are plenty of solutions. Nuclear recycling to reduce the quantity, dropping it in ocean subduction zones encased in concrete, packaging it so as not to need water cooling then simply warehousing it.
While true, that says absolutely nothing about quality or necessity of the current regulatory regime.
Distinction without a difference in my view. What prevented Rome from raising bigger armies? Probably the economic devastation resulting from eating all its seed corn on the peninsula and relying on tribute for hundreds of years. Why is there ever a shortage of humans? Always, always because of a food shortage 20 years earlier. Royal heirs? Dead easy to find — when the economy’s good. You can replace the king with a goat in a dunce hat when the economy’s good without any risk of peasant insurrection. Absence of royal heirs only becomes a problem when there’s enough unrest that factions form and non-heirs try to gain an upper hand. (Again, my view of human societies is through the lens of thermodynamics. All these parameters are related to flows of energy through society. Since humans ultimately rely on energy to live, it always comes back to food.)
Agree, but this just becomes an “efficiency tax” if you will. You have to use some of your energy to produce diesel fuel instead of doing all the neat stuff you’d like to do to grow the economy. The harder it gets to produce diesel fuel, the bigger this “efficiency tax” becomes. It’s not like “we have a trick for producing diesel fuel, so we just use that”. There’s a cost to inefficiently producing diesel fuel, and it’s naive not to expect this cost to be reflected somehow or other in some aspect of society.
This is, in part, due to the regulations that make it so expensive. It could be cheaper, but then it would likely be less safe. Since you’re trading off one against the other, you need to keep both in mind — you can’t just say “safety’s solved, price is solved, let’s go home” because trying to solve the price problem exacerbates the safety problem.
Also, the biggest risks of nuclear are catastrophic modes of failure that we have (fortunately) not encountered much of so far. The question is whether we can continue to avoid such catastrophes if energy becomes more expensive and we start gutting regulations and cutting corners everywhere throughout the economy. If we’re spending part of our “fossil fuel surplus” preventing nuclear disasters, and our “fossil fuel surplus” goes to zero, it’s not clear whether nuclear is going to save us or be the last nail in the coffin.
@cassander:
Yes, obviously. Hence my pessimism.
You are obviously silly if you can’t credit the notion that “a lot of time” is relative to the timescales under discussion. Also, note that “a few hundred years” is an unrealistic theoretical ideal. If you were trying to understand instead of poking holes you might have realized this yourself.
Which of these solutions is currently being implemented at an industrial scale? None?
Then I repeat: the waste problem is not solved.
You seem to be looking for things to argue about. I don’t think I said anything about the quality or necessity of the current regulatory regime.
Could you please either keep your responses pertinent to my comments or find someone else to argue with?
> What prevented Rome from raising bigger armies? Probably the economic devastation resulting from eating all its seed corn on the peninsula and relying on tribute for hundreds of years.
I don’t buy it. Yes, in the city of Rome proper that was sort of a problem, but most of the population of Italy was farmers, just like they’d always been. From what I’ve read, it looks like the fall of Rome had more to do with civil wars destroying the effectiveness of the army at keeping invaders out (both due to losses and due to prophylactic measures against generals trying to crown themselves), and the growth of effective military technology among the neighbouring barbarians. Food shortages weren’t nearly their biggest problem.
> Why is there ever a shortage of humans? Always, always because of a food shortage 20 years earlier.
Usually because of plagues, actually. Look at the invasion of the Americas – the natives resisted the Europeans quite well, until they were wiped out by the Eurasian disease pool. Africa fell 300+ years after South America, despite broadly similar climates and better communications, because the Africans already had smallpox and typhus.
> Absence of royal heirs only becomes a problem when there’s enough unrest that factions form and non-heirs try to gain an upper hand.
That one normally leads to the end of a state, not a society. I’m unaware of any crown of any active state, no matter how troubled, that has ever gone totally unclaimed. But many of the claimants are large nearby realms who simply incorporate the state in question, leading to the end of the state.
> Agree, but this just becomes an “efficiency tax” if you will.
Agreed. Cheap oil is far better for the economy than expensive oil, no question.
> This is, in part, due to the regulations that make it so expensive.
In part, but I think there’s a pretty good sweet spot there. You can quintuple the death rate from nuclear and still not be any worse than solar. (The PR of this plan would be just a bit problematic, to be fair)
@wysinwygymmv says:
@cassander:
Given the changes in technology and human society that have occurred over the last few hundred years, yes, to proclaim any solution as useless because it isn’t permanent and idiot-proof for all time is patently silly. A few hundred years to work out much more efficient solar panels, how to get uranium out of seawater, to work out thorium reactors, to finally get to fusion, whatever you want, is a huge amount of time.
>Which of these solutions is currently being implemented at an industrial scale? None?
I did not claim the problem was already solved, I said there are solutions. We don’t go to more intensive methods for dealing with fuel now because there’s little need for it. That would change if we produced massively more nuclear power, and thus nuclear waste, but you can’t say this solution doesn’t work just because its answer to a problem it would create, one that doesn’t exist yet, hasn’t already been implemented.
>You seem to be looking for things to argue about. I don’t think I said anything about the quality or necessity of the current regulatory regime.
Then you were beating up a strawman by posing “no regulation” as an alternative to the current regulatory regime and dismissing it. Your choice.
@alexsloat:
Don’t know what to say. I don’t find your response convincing, but this is not really a good venue for rebutting it.
Maybe just food for thought, but have you considered the role of “bread” from “bread and circus” in the civil wars you mention? Bread is a kind of food, and a welfare scheme involving giving out bread seems relevant to the question of whether food shortages factored into the fall of Rome.
1. Plagues are often caused or exacerbated by immune distress caused by food shortages.
2. The black death caused an economic boom in medieval Europe, so the rule cannot be “people shortage” –> “social collapse”. There must be some other more complicated rule. I propose that this rule has to do with the flow of energy through society.
You were the one who cited lack of heirs as a potential cause for social collapse. I rebutted to say that lack of heirs is only a problem when societies are already collapsing. Your response does not seem relevant to my rebuttal.
@cassander:
“As long as we believe really strongly that a solution is inevitable, then a solution must be inevitable!”
Also, I mentioned that “a few hundred years” is an unrealistic theoretical ideal. If you were engaging in good faith instead of trying to poke holes, you might have acknowledged that.
“As long as we believe really strongly that a solution is inevitable, then a solution must be inevitable!”
Actually, it’s your choice! There are, of course, dozens of other possible interpretations, but you certainly don’t seem like you have the imagination to come up with any of them on your own. Maybe trying will be a useful exercise for you, though.
If it didn’t succumb to anything, it would be part of the current one.
(And succumbing to pretty much anything can be interpreted as “succumbing to a shortage”).
> have you considered the role of “bread” from “bread and circus” in the civil wars you mention? Bread is a kind of food, and a welfare scheme involving giving out bread seems relevant to the question of whether food shortages factored into the fall of Rome.
Thing is, the civil wars were so far as I’m aware almost always internal to the military – the civilian population never exerted all that much political power in the imperial era. Food shortages led to none of them that I’m aware of, attempts to wrest supreme power from a weak Emperor led to all of them.
> 1. Plagues are often caused or exacerbated by immune distress caused by food shortages.
Often, not always. I specifically meant that mostly as a reference to the Americas, where food shortages played effectively no role.
> 2. The black death caused an economic boom in medieval Europe, so the rule cannot be “people shortage” –> “social collapse”. There must be some other more complicated rule. I propose that this rule has to do with the flow of energy through society.
If it hits everyone equally (as the Black Death did), then the end of the ugly tail of the diminishing returns curve helps the economy out. If it only affects you, and not your enemies (as the Columbian plague did), you generally get conquered in fairly short order.
> You were the one who cited lack of heirs as a potential cause for social collapse. I rebutted to say that lack of heirs is only a problem when societies are already collapsing. Your response does not seem relevant to my rebuttal.
My response was intended as a clarification of the original claim.
@wysinwygymmv
Please point to where I said anything was inevitable. I said it buys time. If it turns out there is some problem that can’t be solved, you try something else.
>Also, I mentioned that “a few hundred years” is an unrealistic theoretical ideal. If you were engaging in good faith instead of trying to poke holes, you might have acknowledged that.
Again, you put words in my mouth.
Pot, this is kettle, you’re black.
…huh. This seems to have been rewritten since I logged in and a bunch of what I wanted to respond to seems to have been removed.
But yeah, I think a big part of how to not fall into this trap is that combination of empiricism (i.e., actually looking) and not relying on getting things right the first time. Basically, having a system that allows problems, even totally unanticipated problems, to be corrected (or actually causes them to be corrected) in a subsequent iteration. Closing the control loop, so to speak.
Note importantly that tradition can break free of this just as much as attempts to work things out from first principles! Without negative feedback, tradition can get stupid fast. Basically I think the tradition vs reason distinction may be less important than the open-loop vs closed-loop distinction.
EDIT: And now it’s been rewritten again, to put the empiricism stuff back in. Whoa…
Agreed.
I occasionally hear New Math cited as a relatively recent centrally planned grand catastrophe. People criticize it as if the core idea were obviously wrong, but to me the idiocy was rolling out an experiment nationally all at once. Opening a few New Math schools would have been reasonable.
I agree, but it also seems important to specify just how much empiricism one needs.
There’s a related machine learning concept called “overfitting”, in which one optimizes for the specific, random or biased idiosyncrasies of the data currently available – and creates a model which relies too heavily on those to be of much use on other data which is ostensibly drawn from the same distribution. Less Wrong warns against the perils of overoptimized search, giving what seems like a similar moral.
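For anyone who hasn’t run into overfitting directly, here is a minimal sketch of the failure mode; the data and the underlying “true” relationship are invented purely for illustration:

```python
# Minimal overfitting demo: a degree-9 polynomial chases the noise in a small
# training sample and typically does worse on fresh data than a plain straight
# line, even though it looks "better" on the points it was trained on.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    x = rng.uniform(0, 1, n)
    y = 2 * x + rng.normal(0, 0.2, n)   # "true" relationship: linear plus noise
    return x, y

x_train, y_train = sample(12)
x_test, y_test = sample(1000)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```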
The sort of empiricism I would guess one ought to employ is gradual testing on larger and more diverse scales, trading away cheapness and speed for large sample sizes as one becomes more and more certain. There are already principles like this in software engineering, where a well-crafted system goes through many tests of varying levels before it’s considered good enough to ship, from unit tests which verify the correctness of code over a handful of cases to canarying (with new servers) which live-tests the new software to serve a fraction of customers and monitors the results.
To create a great crop for Tanzania, one would first try to breed a strain that grows well on the experimental farm, then a strain that grows well on ten experimental farms scattered throughout the country, then a strain that grows well and preserves important soil quality indicators in a variety of climates, then a strain that gets good farmer feedback across a couple of villages which are willing to try it out, etc. This, I think, is the real moral: check your work often. Reason is your guide, but reality is your judge.
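A hypothetical sketch of that kind of staged rollout, in the spirit of canarying; evaluate(), the stage sizes, and the acceptance threshold are stand-ins rather than any real deployment or farm-trial API:

```python
# Hypothetical staged rollout: try the change on a small slice, compare against
# the status quo, and only widen if it holds up. evaluate() and its numbers are
# stand-ins, not a real deployment API or crop-trial protocol.
import random

def evaluate(fraction, use_new_version):
    """Stand-in for measuring yield/latency/whatever on `fraction` of the fleet."""
    base = 100.0
    effect = 5.0 if use_new_version else 0.0
    noise_sd = 10.0 / (fraction * 100) ** 0.5   # bigger slice, tighter estimate
    return base + effect + random.gauss(0, noise_sd)

def staged_rollout(stages=(0.01, 0.05, 0.25, 1.0), min_gain=0.0):
    for fraction in stages:
        gain = evaluate(fraction, True) - evaluate(fraction, False)
        if gain < min_gain:
            return f"rolled back at the {fraction:.0%} stage"
    return "fully deployed"

print(staged_rollout())
```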
Never? Or is it just harder than it sounds?
Building houses used to involve all sorts of construction metis, but everything I hear from the field now is that architects really do write plans in so much detail that it’s unskilled labor.
Giant checklists have a good record in medicine.
The ultimate test of intuitive understanding was said to be the game of Go, but it succumbed to a list of rules. Granted, the list was terabytes long and required a giant GPU cluster to evaluate, but it was still a list of rules.
Leaving aside actual machine learning, there seem to be two keys to writing a good rule list:
1) Work hard at it. Hundreds of person-years by real domain experts.
2) Test and iterate. The first draft *will* be buggy. Test it at a small enough scale and then make it better.
If you replace these two steps with buzzwords and fancy suits, of course the result will be disaster. But if you *do* these things, it seems to mostly work.
Regarding Alpha Go – I would argue that the way it works is more like intuition than a list of rules. In the sense that human intuition is a calculation your brain does based on a vast number of inputs related in a multitude of complex ways which are, if not impossible, then very difficult to write down explicitly.
Similarly, Alpha Go works by taking in the state of the board, and using its neural network trained on data from real games by professional players and by playing the game against itself, spits out what it thinks is the best move. However neither the system itself nor any of the programmers could provide any real rule-like explanation for why it chose that particular move.
Another good example of how it’s not useful to think of computers playing board games using “lists of rules”: at the dawn of computer chess, there were (to simplify things) two opposing schools of thought. One school believed that good chess programs would be created by consulting with experts to come up with an extensive list of rules and principles, and applying those. The other just wanted to throw computing power at the problem. Instead of spending a lot of processing power prioritizing moves, you could just look at every move, and every response to those moves, and every response to those, and so on, until you ran out of time. This forces you to have a very cheap evaluation function (How many pieces does each player have? Are they on good squares?), but dramatically increases your search space. As you may be able to tell from the length of my descriptions, the second approach won decisively.
AlphaGo incorporates a number of more recent techniques — neural networks trained on large corpuses of data, reinforcement learning, weighted random sampling of the search tree (Monte Carlo tree search) — but is fundamentally no more based on a list of rules than, say, Deep Blue.
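For the curious, here is a toy sketch of the “search plus cheap evaluation” school described above; tic-tac-toe stands in for chess purely so the example stays self-contained, and real engines add alpha-beta pruning, move ordering, opening books, and so on:

```python
# Toy illustration of the "throw search at it" school: depth-limited negamax
# with a deliberately crude evaluation function. Tic-tac-toe stands in for
# chess so the example is self-contained and runnable.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def cheap_eval(board):
    """Crude heuristic: count pieces on lines the opponent hasn't blocked."""
    score = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if "O" not in cells:
            score += cells.count("X")
        if "X" not in cells:
            score -= cells.count("O")
    return score  # positive favours X

def negamax(board, player, depth):
    if winner(board) is not None:
        return -100  # the previous player just completed a line
    moves = [i for i, c in enumerate(board) if c == "."]
    if not moves:
        return 0     # draw
    if depth == 0:
        return cheap_eval(board) if player == "X" else -cheap_eval(board)
    other = "O" if player == "X" else "X"
    return max(-negamax(board[:m] + player + board[m + 1:], other, depth - 1)
               for m in moves)

def best_move(board, player, depth=4):
    other = "O" if player == "X" else "X"
    moves = [i for i, c in enumerate(board) if c == "."]
    return max(moves, key=lambda m: -negamax(board[:m] + player + board[m + 1:],
                                             other, depth - 1))

print(best_move("X.O.X....", "O"))  # O has to block the 0-4-8 diagonal: prints 8
```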
>Giant checklists have a good record in medicine.
They do in aviation as well. And, as far as I know, the process for writing them was never the FAA working out a grand scheme of perfect checklists. Rather, every time a plane crashes, they figure out who did what wrong and then add that to the list.
Checklists have barely been tried in medicine. Airplanes have a giant book of short checklists. When something goes wrong, you look up the appropriate short checklist. Of course, in a crisis a giant checklist would be ridiculous.
Giant checklists don’t seem to be a very good idea, but there are several kinds of checklists. For a retrospective checklist of the form “Did you remember to do X?” the limit is something like 10 items. For prospective lists of “Do X now,” like for preflight maintenance, longer lists are usable, but it’s not clear that they should be called checklists, rather than routines.
All I know about checklists is what I’ve read in Gawande’s Checklist Manifesto, and a lot of the point is that it wasn’t High Modernist top-down imposition and it wasn’t a giant checklist.
Instead, it was a collaborative project to develop the checklist, and one of the hard things about designing a checklist that worked was keeping it short enough to be usable by people who were also doing an operation.
I took a different message about the connection between Yudkowsky rationalists and LeCorbusier rationalists. The biggest message I’ve taken from Yudkowsky’s writings has always been “optimizing for any simplified measure of desirability short of the full, complex thing may lead to disastrous results.” The community usually shortens this to the word “paperclips.” That seems like the opposite of what LeCorbusier was saying. (Although I do suspect that provably safe systems like the ones he is working on right now will be a disappointment, because anything provable will be insufficiently complex, and anything good will be insufficiently provable.)
In making the connection between the Yudkowsky rationalists and the LeCorbusier rationalists it sure feels like (our) Scott is slowly stumbling towards a, forgive me, Dark Rationalism …
In the U.S., aren’t all successful farms nowadays enormous entities, taking advantage of economies of scale? I know for sure that they are >95% automated compared to the past because most people aren’t farmers anymore. Yields per hour of work might be a better measure than yields per acre.
But how path-dependent was the process that yielded the result? Were smallholders not competitive for purely economic reasons, or did other factors like government policies and demographics make a difference?
Also, if unemployment and underemployment are severe problems, and people are worried about automation and technological unemployment, why would we want to maximize yield per hour of work? It seems to me like we have a glut of hours of work that might be put to profitable use in agriculture — especially since labor-intensive practices are often more land-efficient.
They are, but we didn’t get there by having a president decide in 1928 “alright, starting next week we’re getting rid of all the family farms and sending all the horses to the glue factory.” And lest you think I’m creating a straw man, the Soviets, almost literally, did that. There can be advantages in things like economies of scale and specialization, but they’re almost always discovered by experimentation over time, not grand plans.
I feel chastened by this post. I see a lot of my own worldview reflected in the high modernist’s attitudes and failings, and I felt genuinely triggered while reading people’s reasoning for settling in Brasilia (the streets aren’t fucking crowded enough for you?! And they solve that by moving to less-dense suburbs?).
I suppose this is proof that a person like me should never ever be in charge of anything.
High modernism is a really natural failure mode for smart people. We know we’re smart, we know that we know things, and we get cocky. Humility is a tough lesson, but an utterly necessary one.
As for the density of Brasilia, “density” is a word with more meanings than it might seem at first glance. I grew up in a variety of middling-sized towns, and recently moved to Toronto (~6 million metro area). On paper, it’s by far the most dense place I’ve ever lived, but in practice it feels like the least dense. It took me a while to figure out why it made me so uneasy, but eventually I realized that in my day-to-day life, the raw number of people around me matters very little. What matters is the distance between me and where I want to be. In a small town, everything is close, so it’s dense for practical purposes. In a mega-city, everything is on the other side of the city, ten miles of hideous traffic away. Despite living in the same city, I’m actually further away from the downtown core in terms of travel time than I was when I lived in a city fifty miles away (mostly because back then I made a point of avoiding the city during rush hour, but still).
No, it’s proof that a person like you should never be in charge of everything.
The Soviets had enough grain to prevent the Holodomor &c., but letting rural landowners starve to death was convenient for the state. Same with China’s famine: China had reserves and was even exporting grain. Does the author not even mention this? Even when it comes to raw grain output, I bet it went up several hundred percent under Stalin.
Edit: nvm I’m bad at reading, missed relevant paragraph. Still though.
Doesn’t look like it. And given that the USSR eventually had to resort to importing grain on such a massive scale it caused serious balance of payment issues, it definitely didn’t grow fast enough.
Thanks for the link! AFAIK the USSR didn’t import grain until the 80s, and supposedly that was partially because meat consumption went up starting in the 70s, and meat takes about 10x the grain; raw production actually went up substantially during that time despite it coinciding with the economic downfall of the USSR. The only really serious famine was the Holodomor, a six-month period; for the 50 years after it the Soviets had more or less sufficient grain output. Seems I was wrong to predict several-hundred-percent growth under Stalin, though; thanks for the correction.
Looks like they started a lot earlier, but they didn’t get to be large until the 80s.
I grew up in the 1960s idolizing Jane Jacobs.
At the time, the Urban Renewal program was in full swing. Practically every city in America was using federal funds to demolish historic downtowns and old/poor neighborhoods, and replace them with office complexes and expressways. Everywhere, beautiful places were scorned, destroyed, and replaced with concrete ugliness, and I was very distressed about it.
Jane Jacobs (e.g., in her book The Death and Life of Great American Cities) was one of the first to oppose this.
In Lansing, Michigan, near where I lived, Urban Renewal leveled the north half of downtown in the mid-1960s; it remained vacant land and parking lots for more than ten years, and eventually became a sterile office park. Without critical mass, Lansing’s downtown retailing quickly faded away.
Meanwhile, in Ann Arbor, Urban Renewal was opposed and defeated, mostly by conservative Republicans. The originally targeted area of “blight” is now the Kerrytown neighborhood, with old industrial buildings and houses converted to art galleries and restaurants. Indeed, Ann Arbor is now the only sizeable city in Michigan with an economically vibrant downtown. This is exactly what Jane Jacobs would have predicted.
I still think of Urban Renewal and Le Corbusier Modernism as totalitarian disasters (just as Ceaușescu did the same in Bucharest, destroying the Old City, and the South African white supremacists did the same in Cape Town, forcibly relocating its racially mixed population to remote “homelands”). The architectural profession, decades later, remains in the grip of very rigid 1940s High Modernism. To be respected in the field, you are still required to believe that designing any other way is actually immoral.
But that being said, I am far more technocratic now in my 60s than I was in my 20s. Family farms and rural farming villages are wonderfully picturesque, but scientific agriculture has given us the most plentiful food supply ever seen. Worldwide, starvation is practically a thing of the past. Building codes have made modern cities much safer from fires and earthquakes. Everybody scorns mealy-mouthed diplomats and the United Nations, but their pragmatic work has helped give us the most peaceful half-century in all of human history.
If you look at data maps of Europe from the 16th through 20th centuries, showing the ending of feudalism, the spread of literacy, the decline in poverty and disease and infant mortality, all the benefits of modernization, you’ll see that France (the whole country) stands out as the place where it all happened first.
France is where a revolution, at least purportedly based on human rights, shattered the power of monarchy and religious institutions, and invented the metric system. The French Republic was so thoroughgoing in its drive to rationalize everything that they instituted a new calendar with weeks of ten days, days of ten hours, and hours of 100 minutes. It’s all quaint and pretty now in retrospect, but it was deadly serious at the time.
And, all taken together, it worked: it helped propel the world into the place we now recognize, where we can spend time writing blog comments rather than tending fires, weaving our own clothes, and dying of malaria.
I think the Industrial Revolution had more to do with “propel[ling] the world into the place we now recognise”, and was already started by the time of the French Revolution. As for the French Revolution itself, it resulted in thousands of Frenchmen being murdered, and the political system it created was replaced by a military dictator who broke the back of French power in a series of wars lasting for a quarter of a century — not exactly what I’d describe as a resounding success.
A good read on this front is Robert Caro’s The Power Broker. It’s an 1100-page history of urban planning in New York. Before it was recommended to me, I would have thought you insane if you’d suggested I could actually read such a book, but it’s absolutely compellingly good. It’s the biography of Robert Moses, the king of urban planning in the New York metropolitan area for almost fifty years. It tracks his path from wide-eyed idealist to brutally corrupt and cynical bureaucrat.
There’s wisdom in what you say. Broadly speaking, the motivations that brought us paternalistic regulations on public smoking, consuming raw milk, occupational licensing, etc. also brought us childhood vaccinations and clean water.
Yet there’s something of a middle road here. I don’t want to plug Nassim Taleb too much because he does enough of that himself on Twitter and in his books, but he really has some great insights in Antifragile that would be quite applicable here.
For a quick explanation of why I think he acts like a douche in books and Twitter, from himself in Antifragile:
“Authors, artists, and even philosophers are much better off having a very small number of fanatics behind them than a large number of people who appreciate their work. The number of persons who dislike the work don’t count—there is no such thing as the opposite of buying your book”
A good way to summarize his political views would be something like: libertarian-ish with some hard exceptions for things that induce lots of systemic risk.
Exceptions:
“What should we control? As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks.”
He’s libertarian-ish partly for moral reasons (freedom is good, in my reading of his work/Twitter), but also because that tends to lead to well-adapted practices at the local level that might not seem optimal but are surprisingly stable in the long term.
Another concept that comes along for the ride is scale. Lots of smaller city-states are not as optimal as a unified country but are less likely to all fail catastrophically simultaneously. What seems like redundancy is often more adapted to local conditions than a more optimized, leveraged, riskier structure.
From my copy of Antifragile, some good quotes:
“It is as if the mission of modernity was to squeeze every drop of variability and randomness out of life— with the ironic result of making the world a lot more unpredictable, as if the goddesses of chance wanted to have the last word.”
“More data—such as paying attention to the eye colors of the people around when crossing the street—can make you miss the big truck.”
“In an antique city-state, or a modern municipality, shame is the penalty for the violation of ethics—making things more symmetric. Banishment and exile, or, worse, ostracism were severe penalties—people did not move around voluntarily and considered uprooting a horrible calamity. In larger organisms like the mega holy nation-state, with a smaller role for face-to-face encounters, and social roots, shame ceases to fulfill its duty of disciplinarian.”
This is in line with his emphasis on skin in the game—simple heuristics for punishment are better than complex regulation. Better to execute an architect whose house kills a family than try to prevent it with regulation because the person building the house will then have his incentives properly aligned with the people living inside.
“Under opacity and in the newfound complexity of the world, people can hide risks and hurt others, with the law incapable of catching them. Iatrogenics has both delayed and invisible consequences. It is hard to see causal links, to fully understand what’s going on. Under such epistemic limitations, skin in the game is the only true mitigator of fragility.”
On black swans and tail risks:
“But they never notice the following inconsistency: this so-called worst-case event, when it happened, exceeded the worst case at the time.”
Chesterton’s Fence, kind of:
“there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding.”
“I feel anger and frustration when I think that one in ten Americans beyond the age of high school is on some kind of antidepressant, such as Prozac. Indeed, when you go through mood swings, you now have to justify why you are not on some medication. There may be a few good reasons to be on medication, in severely pathological cases, but my mood, my sadness, my bouts of anxiety, are a second source of intelligence–perhaps even the first source. I get mellow and lose physical energy when it rains, become more meditative, and tend to write more and more slowly then, with the raindrops hitting the window, what Verlaine called autumnal “sobs” (sanglots). Some days I enter poetic melancholic states, what the Portuguese call saudade or the Turks huzun (from the Arabic word for sadness). Other days I am more aggressive, have more energy–and will write less, walk more, do other things, argue with researchers, answer emails, draw graphs on blackboards. Should I be turned into a vegetable or a happy imbecile”
“It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it.”
“Had Prozac been available last century, Baudelaire’s “spleen,” Edgar Allan Poe’s moods, the poetry of Sylvia Plath, the lamentations of so many other poets, everything with a soul would have been silenced*….
If large pharmaceutical companies were able to eliminate the seasons, they would probably do so–for profit, of course.
*This does not mean that Sylvia Plath should not have been medicated at all. The point is that pathologies should be medicated when there is risk of suicide, not mood swings.”
It looks like maybe we’ve started to learn our lesson. Here’s the Boston Globe, calling for ‘No plan’ as 2030’s urban planning blueprint: Editorial in Ideas Section.
Personally, I want a playbook to quickly react to driver-less car technology if it becomes a big part of the future. One reading of how the State negatively impacts the poor: the state finds a way to tax or regulate any new productivity technology up to the point where the median worker experiences no real growth in quality of life.
The rich will still benefit, and be allowed to pay special premiums which will allow their middle school kids to be robo-chauffeured anywhere on a whim. Meanwhile, a working class guy, who reliably makes 3 bus connections every day to get to work, will still be waiting for his first non-novelty ride as Beacon Hill goes on a decade-long cycle of assesses->debates->divides up the pie->debates->assesses… Just like how it took three years to get one lane at the airport spray-painted for “AppRides”, when that is literally the largest and most important thing to happen to civic infrastructure during this time.
Basically we need a plan ahead of time, to give the productivity dividend from this technology to the people, not to a new state agency. Inertia is the enemy when there’s a new trick being learned by an old city. Roll it out fast, roll it out hard, iron out the details when problems become apparent.
Yeah, that was confusing for me too reading this review (before I got to the conclusion). The cities with “organic” natural growth tended to burn down with unacceptable frequency; that’s the reason building codes started to get traction. The unplanned system of water supply meant that cholera outbreaks weren’t uncommon. Industrial farming did mostly outcompete the small-scale family farms out of existence in the West (there are lots of problems, but the food is cheaper than ever).
I think the case would be more believable if it started with the conclusion, instead of starting with what sounds like equating all planning with Soviet agriculture.
It seems to me that formal education suffers from problems similar to industrial farming. The only book I know that addresses this is psychologist Peter Gray’s Free to Learn, in which it is argued that children learn despite rather than because of formal education. At an anecdotal level, from my own experience, and from watching my daughter develop over the last number of years, I’ve become convinced that this is true, at least for people of above average intelligence, but the actual argument in Free to Learn seems to depend too much on the notion of the Noble Savage and the results of priming experiments. So I’d like to see a more careful study.
About Soviet industrial farming towns. Of course the forced collectivization was a disaster. But in later years, ideology adjusted somewhat to reality, and the ratio between the different kinds of collectives (kolkhoz versus sovkhoz, for example) changed over time for the purposes of efficiency. Also, there is an old sovkhoz town not far from where I live, apparently not very much changed from the old days in terms of layout and architecture. It is kind-of-sort-of cute and compares very favourably to an American rust-belt town of similar size. As the farming was industrial, this seems the appropriate comparison. Indeed, you can hop on the very cheap commuter train and be in Moscow in an hour and a half. And I should hardly need to say that Moscow is incomparably more favourable than an American rust-belt metropolis. None of this is to say that Soviet industrial farming was successful. Just that, after Stalin died, it was not an unmitigated nightmare, and that the artifacts from that time are in some ways not inferior to the artifacts of post-industrial small towns in America.
Regardless of his prescriptive content, I found the idea that governments systematically change the environment to make it easier to govern a very useful addition to my mental toolkit, and well worth the cost of reading the book.
But another use of legibility is in innovation. Traditional societies often have methods of doing things that work well for reasons that nobody understands. But if that’s the case it’s hard to change things because you run a big risk of breaking something when you do. And enforced legibility, even if it’s not quite as good as what came before, sometimes allows you to change multiple things in different places at the same time to get out of a local maximum.
Yes. Raising standards of legibility often makes things worse before it makes them better. But it does make them better in the end (sometimes). The article below is a very interesting case study of this in the fields of cartography and development economics, two seemingly very distinct subjects.
http://web.mit.edu/krugman/www/dishpan.html
I like this thought. Legibility allows understanding the reasons why things work – doesn’t guarantee, but allows. The trade-off is between continuing illegible practice at a poorly-understood local maximum, or imposing legibility. Imposing legibility is likely to damage results at least in the short term but admits the possibility of reaching a higher maximum through revealing and better understanding the underlying causal factors.
It seems to me that increased legibility also makes totalitarianism more possible: tracking of web searches, license-plate cams, geotracking of mobile phones. Perhaps I’m missing the specific definition of legibility in this context, and the three things I noted are definitely far afield from a standard bushel basket, but I think the point is the same.
Not that far afield, really. What is the purpose of license plates, other than to make sure you pay your vehicle tax and ease of fining you if you break traffic ordinances?
Well, there’s the public safety bit. That’s important. Apparently medieval peasants didn’t think public safety an adequate trade-off for ease of imposition of state power. We have differing sensibilities.
Who’s “we?”
One side note — Bill Sparkman, the census taker whose murder is referred to in Section III, was not actually murdered, contrary to initial press reports. His death was ruled a suicide; the coroner found that he staged his death to look like a murder so that his family would collect life insurance.
I guess they have different insurance rules there? Here you can collect on suicides, whether sane or insane (that is actually the contract wording) as long as the policy has been in force 2+ years.
The wonderful things you learn in insurance sales…
Apparently he bought the life insurance after “late 2008”, and then killed himself September 2009, so it would have been within 2 years.
While the links are broken, the Wiki article asserts that the insurance policy had been taken out fairly immediately prior to his death.
My (admittedly amateur and mostly picked up from crime novels) understanding was that the “no payouts for suicide” rule was to prevent people taking out large insurance policies (for their families) and then killing themselves shortly afterwards? Your 2+ year rule would get around that because people would have been paying in premiums and it would not be “bought this policy specifically to claim because I’m going to kill myself a week afterwards”.
The notion of life insurance is “suppose something unforeseen and unexpected happens, it’s wise to be prepared in case you die suddenly and out of the blue at some indefinite time”, not “you are definitely going to kill yourself the day after you take this huge policy out”.
I think it might also have something to do with murders? I vaguely have an idea that there was a mini-spate of “parents/guardians taking out life insurance policies on children then poisoning them” as well as spouse murders for the same reason.
You are correct. However, some more forward-thinking people wait the two years. I once heard a talk from a life insurance claims adjuster, and while a lot of her stories were funny ones about catching people who faked their own deaths for the insurance, she did mention that one of the sadder genres of claims is the poor family man who buys an oversized insurance policy and kills himself after two years and a day because it’s the only way to provide for his family.
I wonder if there isn’t something akin to Gell-Mann amnesia at work when we try to discover where local knowledge works: we can all think of examples within our own spheres but assume that everything else has been successfully subject to a greater degree of rationalization. I bet we can find examples wherever the gods of the copybook headings are afoot, particularly in politics and interpersonal relations. I think the tension is less between rational/scientific and local/traditional than it is between exogenous/theoretical and indigenous/experimental. In some cases of local knowledge, a reference to Chesterton’s fence or a colourful myth are just cover stories or mnemonics for a rational process that occurred in the past.
This reminds me a lot of Douglas Adams’s speech “Is There an Artificial God?”, where he makes similar points. It’s a very SlateStarCodex-esque piece of writing, and I totally recommend reading the whole thing.
There’s a passage in the speech where he talks about rice farming in Bali:
For anyone that likes reading high modernism-aesthetics-bashing and traditional city-porn, this is a nice archive. Some more radical things as well… but I like the city-themed ones.
I lived in Canberra, a city planned in the early twentieth century by Walter Burley Griffin, and it was deadening to the soul to live there. No vibe, no life.
But that doesn’t mean the idea of planning a city is wrong. Like Wen Jiabao said, it’s too soon to tell. We are iterating the ideas for how to make cities, just very slowly compared to how we iterate agriculture.
Streets last more or less forever so cities get made once. And there aren’t too many of them, compared to fields under the plow. So we haven’t had many learning opportunities yet. The sample space is still small. Give us time!
One place where we can experiment spatially (and where we still see the excesses of centralised ‘rational’ planning) is in office layouts. Shout out here to anyone forced to ‘hot desk’.
The problem is not with using concrete as a building material; as Scott says, it’s cheap, durable, and strong – the Romans invented it, for goodness’ sake!
The problem is that after you slap up your bare grey concrete walls you then forbid any covering of the bare grey concrete walls and make a Principle out of forbidding people to paint, decorate, or do anything to modify the bare grey concrete. I’m not surprised graffiti sprang up as a response in urban areas, even the ugly “scrawling names” type not “this is an artform of its own” type – there’s a human habit to decorate our items of use, not merely leave them functionally bare.
Plain grey concrete may work wonderfully – or at least tolerably – in a perpetually (or nearly perpetually) sunny, brightly lit and blue-skied environment; in a cold rainy one, not so much.
I’m pretty sure graffiti is older than concrete.
Ancient Romans painted graffiti, and they also had concrete. Maybe concrete doesn’t cause graffiti but rather preserves it.
There is graffiti in the Pyramids, so yes.
And cave paintings before that.
Cave paintings are arguably wall art, not graffiti – it’s not defacing if it’s your cave.
Something that is hinted at in places, but which seems to get insufficient attention in this discussion; legibility isn’t just necessary for the government that wants to impose taxes everywhere, but also for the corporation that wants to do business everywhere. It is a requirement for capitalism as well as for central planning. And making everything local may sometimes work better than central planning, but it usually works much worse than capitalism.
“Making everything local” accomplishes some goals better than “capitalism”, and accomplishes other goals worse. It’s a tradeoff like everything else.
Whether one is “better” or “worse” than the other depends entirely on what one is trying to accomplish.
To the extent that the green revolution “works”, it may do so by drawing down on long-term stored nutrients in the topsoil of the American midwest, i.e. it may be a result of unsustainable practices. If so, we might eventually find ourselves with 10 billion people on earth and enough food for 5 billion. The green revolution wouldn’t look like such a success story to people born into such a situation. I think this is relevant to any comparison of the relative merits of local small-scale economies and global capitalism.
True. Capitalism seems to be one of the biggest factors in why modern westerners are dozens of times wealthier than their ancestors of a few centuries ago. As Marx said, capitalism’s obsessive focus on profit does indeed deliver profit. But there are other things that matter besides wealth, of course. On the other hand, I do not think the sustainability argument is particularly impressive; it doesn’t seem to be that uncommon for local practices to be, or to become, unsustainable, so it becomes a matter of comparing how often unsustainability results with each alternative, and how damaging the forms of it are, and I would need to see the proof that such a comparison would definitely end up favoring the local approach.
I think it’s clear that this would need to be taken on a case-by-case basis, which is actually my point.
A diversity of local approaches is in some sense less fragile than a global rationalistic approach because some of the local approaches will be more sustainable, and thus more long-term successful, and thus form the seed of more sustainable practices in the long-term. Darwinian selection in action!
But if a global rationalistic approach is unsustainable, then it will ultimately collapse and there will not be any competing systems to step into the void.
A global rationalistic approach predicated on drawing down a storehouse of solar energy, embodied in fossil fuels over the course of a millions-of-years process, is obviously long-term unsustainable. The fact that Mayan agriculture was also unsustainable doesn’t magically retroactively make industrial capitalism more sustainable than it actually is. But a much smaller proportion of humanity was dependent on Mayan agriculture than is dependent on industrial agriculture, and so the inevitable failure of the former might not have been as catastrophic as the inevitable failure of the latter.
I feel a bit put out that none of the local libertarians or conservatives are stepping up to defend capitalism, leaving it to poor old leftist me, but if I’m the only one willing and able to step up, so be it. Capitalism seems to be quite good at adapting and improving in the face of changing circumstances. This does not guarantee that it will always be able to adapt to future crises of its own making, but it does mean that forecasts of inescapable doom are far from certain. On the other hand, the absence of globalization does not guarantee the absence of global catastrophes; those can come about in other ways besides the global order screwing everything up. There are some possible future global catastrophes which capitalism, with its adaptability (and, remember, also vast resources) could overcome, which would destroy all the little local enclaves in a localized world. So, again, even if the only thing one cares about is existential risk, it does not seem to be crystal clear that capitalism is inferior to local approaches on that specific measure.
By “capitalism” are we referring to an abstract economic system whereby the rate of production of goods is increased through the purchase of capital through a system of loans which is made profitable through the use of interest?
Or are we talking about the actually-existing extensional hodge podge of regulations, compromises, and market actors that people tend to refer to as “capitalism” regardless of its actual correspondence to the abstract concept?
It’s the latter that has proven adaptable, but arguably it does not look very much like people’s abstract concepts of “capitalism”, and frequently works in spite of its resemblance to capitalism in the abstract.
So basically I need to know what I’m arguing against to go any further with this.
In the case of pure capitalism, or the pure capitalist aspects of the existing system, I can point to specific incentives that are essential elements of capitalism that lead ultimately to social collapse. E.g. profit motive incentivizes sociopathy, other aspects of capitalism remove incentives against sociopathy, and so sociopathic behavior increases over time. Capitalism is parasitic on trust built up through non-capitalist social processes; that trust can be strip-mined (e.g. Sears trading on their good name long after the company itself has gone to shit), meaning long term capitalism erodes features of societies on which its success depends.
Or, more simply, capitalism is oriented towards growth, and from a simple logical perspective, infinite growth is obviously not sustainable.
I mean, you can interpret the fact that a lot of ancient religions ban usury as superstition or as metis. Which interpretation you choose will be relevant to your perspective on capitalism.
Edit:
“capitalism” is the abstract concept, “capitalism*” is the actually-existing-thing
So we can spin a story about capitalism* being adaptable because, e.g., the New Deal saved capitalism from a crisis in the early 20th century US. But New Deal policies are directly opposed to capitalist principles. So to the extent that capitalism* is adaptable, it may be so only because it does not rigidly adhere to capitalist principles.
Not meant to be a true story about the early 20th century US capitalist economy, just an example to motivate the idea. I think it’s important to note, though, that the success of capitalism* can only be used as evidence for the efficacy of capitalism if it’s the resemblance rather than the differences between the two that caused capitalism*’s success.
I would have thought context would have made it clear that the discussion isn’t about spherical cows and frictionless planes.
OK, but in that case your arguments in favor of capitalism are all missing the step where you show that actually existing capitalism is successful because of its adherence to capitalist principles rather than its deviations from them. Hence the question and the significant amount of text intended to clarify why I was asking it.
Sorry if I have offended you.
> But if a global rationalistic approach is unsustainable, then it will ultimately collapse and there will not be any competing systems to step into the void.
This is why freedom is so important, and why mandating “best practices” in law is dangerous. The global system is large enough for a lot of competing sorts of rationalist (or rationalish) approaches, but governments tend to spend a lot more time than they should banning competitors. Capitalism, properly understood, mostly bypasses these problems (not all – it’s vulnerable to sudden collapses of local maxima, or to insufficiently-internalized externalities – but most), but governments seeking legibility will sometimes wreck the system of checks and balances.
@wysinwygymmv, I find the arguments of Smith and Marx regarding the reasons capitalist economies are more productive than the traditional economies that preceded them to be largely convincing, especially in light of later history which seems to have provided ever more support for them. But I’m not sure it’s actually necessary to bring that up in order to make my case that your sustainability argument has gaps. I kind of feel like you are changing the subject (especially as you now seem to be bringing up criticisms of capitalism that I partly agree with; I am not arguing that capitalism is beyond question the best system ever and the purer the better, I am defending the Smith/Marx view that capitalism is better than the traditional economies that preceded it).
I disagree that “more productive” == “better”.
This is inextricably connected to my points about sustainability: in many cases, “more productive” undermines “sustainable”.
This has been the premise of my argument all along; from my perspective, I haven’t changed the subject.
However, I acknowledge that I probably have a different map than yours that puts connections where you have none, and that it might not be obvious how my arguments are connected to my premise from the perspective of someone who has a somewhat different model of political economy. This is probably a good explanation for why you perceive my sustainability argument as having “gaps”.
Just a note. Not all the villages that sat on the land that Le Corbusier made into Chandigarh were demolished. A little carve-out called Burail remains. It is dense and winding, and locals go there to shop.
https://thefunambulist.net/architectural-projects/proletarian-fortresses-the-corbusean-grids-anomaly-burail-in-chandigarh
The review reminds me a lot of my time in Shanghai. For those who have never been to Shanghai or any Chinese cities, the majority of cities in China are great communist monoliths, just a sea of grey concrete apartment blocks, utterly indistinct. This shouldn’t really be a surprise as most of these cities have probably sprung up from tiny towns and villages in a matter of decades and there was little other option but to pave everything over for apartment blocks. Nonetheless, it’s quite dispiriting to visit these places. The exceptions are the handful of cities that already existed in the 19th century, your Beijings, Shanghais and Xi’ans.
Shanghai in particular is an interesting one, as large swathes of the city were given to the European powers in the wake of the Opium Wars. A large chunk of the city centre is still referred to as the “French concession” (though the local government has outlawed it being called that and it is now referred to as XuHui). The traditional style of Chinese cities was built around an arrangement called hutongs, which were actually a very early form of government central planning. I’m not too good at describing them so I’ll just drop a link to Wikipedia: https://en.wikipedia.org/wiki/Hutong. However I guess that the ancient Chinese governments hadn’t heard of high modernist architecture, so they’re actually quite lovely, though this may be a selection bias due to only the wealthy hutongs surviving today.
A large part of the Shanghai centre, including much of the concession, is built up into ‘hutongs’ – but note as the wiki says, hutongs are a northern phenomenon. The hutongs in Shanghai are not actually hutongs and are better termed as shikumen. In a sense many of the housing blocks are a fusion between the hutong layout with 19th century European design. Many of the surviving hutongs are simply stunning as a result and I would thoroughly recommend anyone who visits the city to spend some time just walking through the back alleys and tiny streets.
Of course, this being China, nothing is allowed to stay beautiful for long. The communist blocks have proliferated through Shanghai, and every day I was there new towers were being built. The grey concrete had largely been abandoned by that point but it didn’t stop developers constructing 8-12 identical, giant towers, though they seemed to prefer square grids rather than rectangular. The prime building space came from the old hutongs, often through forced relocation, or in some cases simply ignoring holdouts and building their monstrosities right around one isolated old building. The towers are much more orderly, more modern, with space for car parking and green areas between the tower blocks. Who wouldn’t want that instead of a cramped old hutong?
By far the worst of it is where the central state meets rapacious capitalism. There is a major street beginning at Jingan Temple and proceeding to People’s Square which has become nothing but mega malls. They operate with a bizarre inversion of high modernist principles as well: design-wise they all look different and some are actually quite impressive buildings, but the innards are all identical. They all have the same batch of big brand stores and luxury brands, the same collection of mediocre restaurants, the same overpriced foreign supermarkets in the basement. You’ll never have seen so many Gucci stores along one street. Even in a city as populous as Shanghai, I still frankly have no idea how some of the malls stayed open, there was so much competition. It used to really depress me, walking the streets and seeing another fine old patch of buildings being demolished to make way for that crap.
But on the other hand, most of what is happening is really quite necessary. Shanghai is so rammed that it cannot function as a city without building up, and up, and up. The hutongs really can be quite decrepit – I know, having stayed in one for a few months. They do have problems with plumbing, and electricity and internet. The streets and alleys are far too small for all the cars in the city.
This doesn’t quite map to the rich vs poor divide; pretty much anyone who had the fortune to be born in Shanghai and own property has become very wealthy just thanks to the growth of the city. It’s also hardly a perfect example of what the book describes – the hutong being early government planning, the Shanghai architecture being imposed by foreign colonialists, and the current development often arising without any clear planning, since the various parts of the local government either aren’t talking to each other, or a developer is just bribing their way around any issues.
If I had to make any conclusion, it would be that modernist city planning is just an alternative failure mode to the existing failing city. Metis will not solve the issues plaguing modern cities either.
One thing I kept thinking as I read this: “the Chinese state fixed this particular legibility problem 2000 years ago.” Maybe every Le Corbusier should have a portrait of Lord Shang Yang in his office (and even, to a lesser extent, Confucius?).
Example: the first Emperor standardized axle lengths, writing, and weights and measures throughout an area much bigger than France 2200 years ago (remember this, also, next time someone is tempted to tell you how backwards and illogical the Chinese writing system is). Of course, there was a rebellion 15 years later… but the reforms mostly stuck, confirming my sense that the best ruler to be is the guy who overthrew the guy who fought a bitter fight against all the established institutions: you get to keep his legibility without being blamed for the atrocities (see Sui–>Tang Dynasty).
Also, as much as I am highly sympathetic to the “organic, local knowledge beats expert planners every time” case, I also thought to myself that the Chinese kind of figured out this city planning thing a long time ago, too. Kyoto, for example, based on the design of Chang’an, and divided up into neat little grids labelled “street number one,” “street number two,” is nevertheless much more pleasant, imo, than Tokyo, which is basically an overgrown fishing village. Sadly, the horrific, sprawling, soulless suburbs the Chinese have recently taken to designing show that this doesn’t scale, at least not directly.
The key seems to be to only plan broad outlines and let the locals fill in the rest. This may have been a result of the ancient need to design and build city walls (and walls within wall) as part of the state’s military defense, but not to micromanage everything that went on within them. This probably applies to a lot of other areas as well.
Western units go back to Rome with about the same amount of accuracy as Chinese units. An ell has been 1.5 feet for 4000 years.
Was use of these less consistent after the fall of the Roman empire?
I have read arguments that the “Dark Ages” weren’t really so dark in terms of overall economic growth and technological advancement–they were just periods when centralized institutions of empire fell apart and growth happened in a more diffuse, localized manner.
Maybe we should change “the Dark Ages” to “the Illegible Ages.”
Yes, China was probably more consistent across space, even if it was not more consistent across time.
The Dark Ages were a period of technological leveling. A low level of technology was spread far, but the heights decayed. The heights of technology, even agricultural technology, also decayed in Roman times.
The decentralization led to more diffuse wealth and less concentrated displays of wealth. But wealth had mainly been agricultural, so it didn’t diffuse the growth. The decentralization reduced principal-agent problems and produced better incentives, thus more growth. The tax farmer agents of the stationary bandit were worse than the roving bandits that replaced them.
The traditional account of 1000 years of darkness from the deposition of Romulus Augustulus in 476 to the Renaissance is complete nonsense, but there definitely was a dark age in the sense of real, evidenced decline in populations, living standards, and material culture. The actual period is highly debated, but it started as late as the 700s and was over by as early as the 900s. It also was far from uniform, with some places having earlier or later declines and revivals.
Illegible ages is actually a really good name. There is definitely a broader period where there was a pronounced decline in written records. This decline tended to both precede material decline and outlast it, leaving the period from, again very roughly, 500-1000 rather opaque to modern historians. I’m by no means an expert on the period, I’m not even an enthusiastic dabbler, but when I read history from that period, it feels less like history than archeology, more like the period before 500 BC than classical antiquity.
Modern historians call it the Dark Ages referring to paucity of documentation.
I think modern historians usually refer to it as the Early Middle Ages, not the Dark Ages.
I generally go by the Penguin Atlas of World Population History, although it may be a little out of date by now. On its figures, European population starts down at about 200 A.D. and starts back up about 600 A.D., with the Roman Empire barely cold in its grave. It continues to increase more and more rapidly until about 1300. That’s almost the opposite of the pattern you describe.
Do you have a more recent source for European population figures?
This also reminds me of historian Paul Johnson’s tracts against Social Engineering in his book Modern Times which I highly recommend. Interestingly his critique of these practices comes from a conservative and generally pro-state standpoint.
The question is to what extent are the green revolution and modern forestry a direct continuation of these failed attempts of the past? If very much so, then it seems more of a cautionary tale about the necessary costs of progress, or perhaps a call to implement progress more humanely, than an actual indictment of high modernism (with the Tanzanian farmers as an outlier). If not so much, the success of subsequent endeavors means the failed projects should highlight dangerous pitfalls for us to avoid.
Do we have any citations as to how successful the Meiji restoration was and how much damage it caused in the name of progress along the way? It sounds like a very interesting data point, and my only relevant knowledge is that pre-WWII Japanese politics & society were weird (a general gunning down politicians because they tried to reduce the military budget was acquitted on grounds of self-defense, and popular support arrived in the form of self-removed pinky fingers, IIRC).
nobody had ever bothered to tell me before that they actually produced more crops per acre, at least some of the time.
Yep, I had a very similar experience reading about the breakup of feudalism in the East Asian countries (peacefully in Japan, Korea, Taiwan; not so much in China). Apparently the small farmers were able to out-compete even US farms on a per-acre basis, because the high-capital, low-labour mix needed the standardisation, whereas the peasants could (for example) put cash crops in between their staples and overall use their land more efficiently.
Indeed, the book this was in uses it as one of the big reasons for the divergence between the East Asian and SE Asian countries (Philippines, Indonesia, country I’m forgetting) – the latter all kept the large landholders, who tried to mimic Western farming practices but, despite access to better credit and capital, couldn’t beat the high-labour, low-capital mix (keeping in mind that wages can make this unfeasible for running a farm in developed countries).
Also keeping in mind that a high-labour approach to food production needs to be significantly more productive to deal with the overhead food required by the people working it, in order to sustain a given non-farming population. A slight advantage is not sufficient.
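A minimal back-of-the-envelope sketch of that overhead point, with entirely made-up numbers (none of these figures come from the book or the thread): a modest per-acre yield advantage for the labour-intensive farm can be eaten up by its workers’ own food consumption, leaving no more surplus for the non-farming population.

```python
# Toy comparison (hypothetical numbers): how much food per acre is left
# over for non-farmers once the farm workers have eaten their share?

def surplus_per_acre(gross_yield, workers_per_acre, food_per_worker):
    """Food available to feed non-farmers, per acre, in arbitrary units."""
    return gross_yield - workers_per_acre * food_per_worker

FOOD_PER_WORKER = 1.0  # assumed annual food requirement of one worker

# Capital-intensive farm: lower gross yield per acre, very little labour.
mechanized = surplus_per_acre(gross_yield=10.0, workers_per_acre=0.1,
                              food_per_worker=FOOD_PER_WORKER)

# Labour-intensive smallholding: 20% higher gross yield, but 30x the labour.
smallholder = surplus_per_acre(gross_yield=12.0, workers_per_acre=3.0,
                               food_per_worker=FOOD_PER_WORKER)

print(f"mechanized surplus per acre:  {mechanized:.1f}")   # 9.9
print(f"smallholder surplus per acre: {smallholder:.1f}")  # 9.0
```

So even with a 20% per-acre yield advantage in this sketch, the labour-intensive approach supports slightly fewer non-farmers per acre; the advantage has to be large enough to cover the extra mouths.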
As someone who used to go to pop festivals in my younger days, I used to enjoy browsing the part of the festival where various utopian hippies with One Big Idea would hang out their shingles. During this period I was transitioning away from socialism towards libertarianism, and I was starting to see how many of those ideas were bullshit. But the one that used to intrigue me (even though I’m not a “nature person” by nature, have no interest in gardening, etc.) was Permaculture. It does kind of make sense that if you want to “farm”, you really want to just create a chunk of more controlled, but still somewhat messy, Nature, rather than monomaniacally cultivate a monoculture.
There’s something in our prehistory, around the “agricultural revolution”, that went right, and some things that went wrong, and it has something to do with us co-evolving with plant (city) and animal (pastoral) monocultures, in a way that led us to form more insectile (ant-like, bee-like) mental and social structures (kind of a convergent evolutionary thing due to our mental plasticity, i.e. we came to mimic social insects who protect and defend monocultures).
And somewhere in there, there’s a tug of war between our gatherer-hunter instincts and the ant-like mentality and way of thinking that we nearly fell for. And somewhere in here is the battle between the true ruling class, i.e. those in possession of the Dark Arts (the “Brahmin” class, fiddlers with symbols), and the rest of humanity. IOW, the Brahmin class is more prone to “turning” to the insectile way of thinking, and has been prone to trying to force it on the rest of us: intellectuals are peculiarly vulnerable to “LARP-ing rationality through rectilinearity” memes, and are able (through their ability with the Dark Arts) to try and “turn” everyone else.
To put this another way: history is the long, slow transition from rule by Faith to rule by Reason, but there’s an ersatz form of Reason that’s been mimicking Reason like an “Edgar suit” – Rationalization (which is that “LARP-ing of reason”). Rationalization in religion leads to dogma and holy war, rationalization in agriculture leads to disasters; ditto in politics, etc., etc.
And rationalization is the echo of that shift-which-never-quite-occurred, from hunter-gatherer to social insect; it’s the memetic engine behind conformity, it’s the Brahmin-level initiator of the transmission belt that ends up with the masses virtue-signalling, mutually agreeing to the quality of the Emperor’s finery, falling in, knowing their place, toeing the party line, etc.
So that’s how the LW thing fits in: it’s one of many attempts to keep Reason distinct from Rationalization, to keep it on-mission. So when Reason does actually guide, we get the good results; when Rationalization guides, that’s when we get disasters.
(The difference is mainly in terms of being at least prepared to seek disconfirmation, not just eagerly hoovering up all apparent confirmation. Nothing wrong with a bit of planning and ordering, so long as you consult and listen, so long as you’re alert to potential disconfirmation and check to see if the results are the ones you wanted, and so long as you’re prepared to let go of the Grand Plan if evidence shows it’s not working or getting the opposite results. It would be interesting if this discipline were introduced to politics somehow – oh I don’t know, say, like a rule whereby you don’t introduce a new regulation unless you repeal a couple of old ones … 🙂 )
As a fiddler-with-symbols, I’m feeling a little disparaged here, o ye of Capital Letters.
You get told you’re a fucking wizard and you think it’s disparaging because it acknowledges the possibility of becoming an evil wizard? Look at the up side! You’re a fucking wizard!
Evil, I’ll take, but he called me a bug wizard.
Just keeping you on-mission dude. Like with the wolf/sheepdog analogy on the manly man plane. Grrrr 🙂
Spooky…
I am using this book as an in-class example in a history class I’m visiting today, along with David Graeber’s “The Utopia of Rules” and Weber’s “Protestant work ethic…”
I’ve had these examples planned for more than a week.
I think left-wing anarchists tend to agree with Scott’s views, and their objections to right-libertarianism usually come from an intuition-derived sense that right-libertarian public policy is more conducive to High Modernist ends (that it empowers large corporations, mostly) instead of independent-peasantry ends.
It seems like one lesson, which you briefly touch on in the Tanzanian farming story, is that giving options is more effective than enforcing decrees because of the economic calculation problem.
If an agronomist swings by your village and says “hey guys, guess what I’ve got this new variety of wheat! $20 for a bag of seeds” then the locals can get the benefits of scientific modernity if they choose to. And, more importantly, they can choose not to pay him again the next year if their harvest fails as a result. The agronomist has both additional information gained through the price mechanism and also has a tangible incentive to care about local conditions.
None of those apply to the agent from the State Agronomy Department who shows up and orders that from now on only the new variety of wheat will be planted. There’s no feedback at all between the farmers and the agronomists, to the point where it barely matters whether the new crops grow or not. Lysenkoism was hailed as a success year after year despite producing less than nothing: it literally did not matter to Lysenko or to his Soviet sponsors that his agricultural methods failed.
This ties in with what Sniffnoy was saying above about “closing the control loop”.
Yeah, between push and pull mechanisms, there’s a robustness to pull where people choose what works for them compared to push where people have ‘choices’ imposed on them. Pull allows the particulars of a person’s situation, best known to them, to guide their actions. Push ignores the particulars of a person’s situation.
Godzilla!
Those maps seem to compare a city with one 20 times its size.
Perhaps better to compare to Milton Keynes, which loves its rectangular grids but is also a very livable city.
http://wiki.openstreetmap.org/w/images/2/28/MK_Cake_1b.png
http://www.greendigitalcharter.eu/wp-content/uploads/2012/10/Milton-Keynes.jpg
You can make anything look awful if you look only at the failures.
Yes, it’s probably bad to take that patient off the drugs he uses to cope, but for the other 3 patients behind him in line, who keep having bouts of psychosis whenever they start taking street drugs, it’s entirely the correct advice.
It sounds like High Modernism is taking a SimCity approach to civil engineering: a nice neat grid of roads, highways and subways, with a police station every 3 blocks and a fire station every 5 and a school every 4, and it being overall about a hundred times easier to keep everyone fairly happy than if I tried to just let everything grow organically.
Is Milton Keynes a success? Maybe people who live there like it, but I don’t think it has a good reputation from the outside.
It’s regarded as sort of quaint and amusing, but it appears to be a fairly popular place to live in real terms.
I doubt it was as successful as the designers hoped, but it seems to have succeeded fairly well along a reasonable number of axes.
Isn’t the bad reputation of Milton Keynes (and the new towns in general) precisely that it is a very pleasant place to live, with short commutes and nice public places full of friendly young professionals, and that this is a faintly embarrassing and risible thing to want, the geographic equivalent of elasticated pants and a Volvo?
The British have the same attitude towards pleasant urban environments and friendly neighbours that Americans have towards seatbelts and functioning healthcare systems. Sheer perversity is underrated as an explanatory factor for popular opinion.
This is the best introduction to Ribbonfarm that I’ve read.
The book contained some great historical tidbits, but I’m not sure what overarching lesson I learned from it.
Should I be the one to point out that looking for overarching lessons rather than accepting that life usually presents a series of contingent and limited lessons that are necessarily contextual and dependent upon their fixed positions in culture and time is kind of thinking like a High Modernist?
Is the best way to answer this question via a two stage process of attempting to determine what the ideal sort of person to raise this point would be, and then assessing to what extent deBoer matches that ideal? Or should the two stages be fused into an integrated analysis of the appropriateness of deBoerish characteristics vis a vis the raising of this point?
This sounds like a pretty complicated problem; we should probably just leave the issue half-raised for safety reasons.
I think Scott touched on that point with his closing:
http://www.businessinsider.com/check-cashing-stores-good-deal-upenn-professor-2017-2
I’ll admit the price transparency almost makes me want to start using one, but how many times do you have to pay 2% on a check to start looking for alternatives?
Also, is tipping bank tellers even legal?
Depends how big the cheques are, and how many $50 NSF fees the alternative requires you to pay. Remember that most businesses pay a 2%+ “check cashing fee” to Visa or Mastercard on most of their transactions and call it a good deal.
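To make that trade-off concrete, here is a rough sketch with hypothetical numbers (the check amount, pay frequency, and fee counts are assumptions for illustration, not figures from the linked article): a 2% check-cashing fee can come out roughly even with a nominally free bank account once a few NSF fees and minimum-balance charges pile up.

```python
# Hypothetical comparison: 2% check-cashing fee vs. a "free" bank account
# that occasionally charges overdraft (NSF) and maintenance fees.

CHECK_AMOUNT = 500        # dollars per paycheck (assumed)
CHECKS_PER_YEAR = 26      # biweekly pay (assumed)

check_cashing_cost = 0.02 * CHECK_AMOUNT * CHECKS_PER_YEAR   # $260/year

NSF_FEE = 50              # dollars per bounced transaction
nsf_incidents = 4         # a few bad months over the year (assumed)
monthly_maintenance = 5   # fee when the balance dips below the minimum (assumed)

bank_cost = NSF_FEE * nsf_incidents + monthly_maintenance * 12  # $260/year

print(f"check casher: ${check_cashing_cost:.0f}/yr, bank: ${bank_cost:.0f}/yr")
```

With these particular made-up numbers the two come out identical, which is the point: which one is the better deal depends entirely on the size of the checks and how often the fees hit.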
I think the critical statement here is the one that agriculture (or urban planning or taxation or whatever) is 90% engineering and 10% farming (or construction or economics or whatever).
The author of the book looks at a history of people applying this principle and failing miserably (Brasilia, Tanzanian farming, etc.). From this he concludes that the principle must be wrong and that highly academic, abstract “science” is useless for solving real problems. As Scott (our Scott) notes, this is very similar to a specific kind of anti-science sentiment we see today: the “let’s ask random people instead of scientists” mindset.
To be fair to the author of the book, he uses this conclusion to make a larger point about how governments used this aesthetic to make things easier to tax and so on, which I think is valid, but I want to address the conclusion itself.
High Modernism, or techno-optimism, or whatever you want to call it – the belief that science and rationality and engineering can actually solve difficult problems – is condemned by the author of the book as being cute theories formulated by egghead academics who’ve never been in the real world and don’t know what they’re talking about. This is an age-old anti-science canard and the author’s justification isn’t new either: “we tried your science and it didn’t work.” However, I think the author makes a critical error in evaluating the failures of e.g. Tanzanian agriculture. The problem isn’t that science doesn’t work, it’s that we didn’t do the science right. The agricultural scientists in Tanzania (or the Le Corbusier-inspired architects) believed that they had discovered the One True Principle that governs all of agriculture (or architecture). They applied it, and found out their theory was incomplete. So the scientific method tells us to refine the theory, account for features we didn’t think of, assumptions we made (such as the soil quality in Tanzania, or the fact that people like street corners and a small amount of “bustle”) and construct a more complicated theory and continue iterating. It does not tell us to throw out the scientific method. That’s ludicrous. Scott (our Scott) acknowledges this at the end, pointing out that modern planned cities and agricultural science seem to actually work. That appears to be because some people have been refining the theory in the past 100 or so years – we didn’t listen to the author of the book and we didn’t throw away science and BEHOLD – it got better because that’s what science does.
I won’t deny the hubris of the High Modernists: they believed they had everything figured out and that their way was the One True Way, and they may have done more to hurt the Grand Scientific Endeavor than to help it, as most people now associate certain parts of science (like architecture, for example) with boxy Brutalist buildings that make everyone miserable, which now contributes to anti-science sentiment (the nonreligious kind of anti-science sentiment, I should clarify).
There’s no reason that we can’t use the vast apparatus of human intelligence to design cities with the maximum amount of “bustle” and busy street corners – but that probably won’t work either. We should probably be maximizing for citizen satisfaction, or maybe productivity or something. The actual goal doesn’t matter. The point is that “planned architecture” doesn’t have to be boring rectangular grids. We’re more powerful than that. Science and rationality remain our most powerful tools against Moloch – the fact that some naïve scientists believed they were strong enough to defeat Moloch and failed miserably doesn’t mean we shouldn’t keep working at it.
Okay this turned into a rant, but I see this kind of anti-science all the time and it bugs me a lot.
I think you’re over-interpreting a little.
First of all, you kind of do a “no true Scotsman”-style argument here:
The point that’s being made is that science is always wrong at first. So even if you do science right, you end up with wrong answers at least some of the time (more like almost all the time).
And then if you try to replace an ancient, tried-and-proven system with your preliminary scientific results, you often cause widespread human misery. So don’t do that.
I don’t think that’s “anti-science” — that’s caution about applying science to everyday life. There are more approaches to knowledge and to everyday life than just science, but science’s success over the last few hundred years has caused a kind of “scientific imperialism”. People understandably resent the frequent intrusions of science into domains in which it doesn’t belong, so they push back.
Again, not “anti-science”. Usually “pro something that’s not science”.
I’d recommend Feyerabend’s Against Method. It will probably drive you crazy, but if you read it with an open mind and try to understand where he’s coming from, I think you might get a lot out of it. (Speaking from personal experience as a former atheist I-fucking-love-science type.)
Each of these three things seems not so horrible by itself, but taken together they add up to something nasty:
“Science usually fails the first go-round”
“Scientific progress proceeds one funeral at a time” and
“People whose ‘scientific’ plans go horribly awry not only never lose their jobs, they usually get promoted.”
The other Scott (the author in question) admits that point as well:
Sure, but this kind of common sense attack on science is one we need to have real respect for. “We tried and it didn’t work” “But the one you tried was overconfident and silly – here, I have the right answers this time, I promise!” “Yeah, the last guy said that too. And so did the one before him.” is a really common failure mode, and people bowing out of the argument entirely is an eminently sensible reaction. If we aren’t going to get it 100% right the first time, we need to admit the limitations and be up-front with them, so that we don’t smear science by trying to practice it.
I’ll agree 100%. I think people pushing claims to a much higher confidence than they should causes a lot of (e.g. climate) skepticism.
On medieval land ownership – I recently went to a talk here in Cambridge (1.0) about an archaeology project in a medieval cemetery that was recently excavated as part of building works. The speaker mentioned records we have of a man named Gilbert Curteis, who gave land to St. John’s Hospital so his adult son Robert could be buried there.
The thing is, the St. John’s Hospital cemetery was the second cheapest place to be buried in Cambridge (after a pauper’s grave). So Gilbert didn’t need to leave the hospital much land at all – IIRC it was a piece 12 feet by 8 feet, and not even in the city! I imagine the hospital simply rented the land back to Gilbert and collected rent from him (and his heirs).
On compulsory surnames, Mongolia tried to bring in surnames recently and found that so many people picked the same name (Borjigin, Genghis Khan’s clan name) that they were essentially pointless. On the other hand, Mongolians had surnames until they were abolished under Communism…
Similarly, I’ve heard the ubiquity of the Vietnamese last name Nguyen comes from a ruler with that name – not because he had such a large family, but because he rewarded his family in some way, so others took on the surname in hopes of being favored.
And I’ll put this anecdote here – I used to work with a Filipino man with the last name “Israel.” He explained that at one point a Spanish official had stopped a recent ancestor and required a surname, so the ancestor opened his Bible at random and plopped his finger down on a word, and that was that from then on.
Naming people (though usually newborn children) by opening a holy book to a random page has happened in various places. There are the stories of Puritans doing it, leading to such names as Notwithstanding Griswold. Elsewhere, a Sikh friend in college told me that the Sikh tradition was to open their holy book and give the child a name starting with the first letter of the page it opened at.
Thus proving the merits of randomization-as-inspiration over true randomization for important decisions.
If you have a group of people with surnames, in the next generation, the popularity of each surname will drift slightly from what it was in the previous generation, purely by randomness. Every so often, one will drift to zero, and be permanently eliminated.
If there is no way for new surnames to enter the mix, the effect after many generations of this is that large groups of people will have the same surname.
Someday, we will literally all be Keynesians.
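To make the drift-to-fixation argument concrete, here is a minimal simulation sketch (Python; the population size, surname count, and generation limit are arbitrary assumptions, not figures from the thread). Each generation simply resamples surnames with replacement, so popularity drifts and a surname that hits zero is gone for good:

```python
import random
from collections import Counter

def simulate_surname_drift(population=1000, surnames=50, generations=5000, seed=0):
    """Toy neutral-drift model: each generation, everyone inherits a surname
    drawn at random (with replacement) from the previous generation. A surname
    whose count reaches zero can never reappear."""
    random.seed(seed)
    pool = [f"surname_{i}" for i in range(surnames)] * (population // surnames)
    for gen in range(1, generations + 1):
        pool = random.choices(pool, k=population)  # resample: pure random drift
        if len(set(pool)) == 1:                    # fixation: one surname left
            return gen, Counter(pool)
    return generations, Counter(pool)

gens, counts = simulate_surname_drift()
print(f"after {gens} generations, {len(counts)} surname(s) survive")
```

Run it with a long enough horizon and the number of surviving surnames only ever goes down, which is exactly the fixation effect described above.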
I read it and was looking for a bit more of a thesis, but when I found his prescriptive summary, I thought it made sense – even if it wasn’t revolutionary. Page 356:
>1. Take small steps.
>2. Favor reversibility.
>3. Plan on surprises.
>4. Plan on human inventiveness.
Nice find.
His first two steps are very good and clearly stated advice. His third is also important but phrased in the most frustrating way possible: it makes it sound like you’re planning for a specific surprise, rather than coming up with resilient plans. As for his fourth, I’m not sure what it means.
There’s also, IMO, a missing zeroth step:
>0. Figure out what your limits are.
A lot of the failures mentioned here were planned under the (rather insane) assumption they had total control of the situation. See Le Corbusier:
The freedom a planner has when drawing up his plans is effectively infinite: it’s the freedom of the artist, where your only limit is imagination. But if you want to actually do anything with your plan other than framing it and putting it on a wall, you need to have a firm grasp of what your constraints are.
I think step four is supposed to be something like “plan for other people to take over your system and change it, quite possibly for the better but certainly in some way.”
The Molochian core highlighted here seems to be our endless willingness to sacrifice most of our human values in favor of other, more urgent human values.
Like, it seems like life was much much better for hunter-gatherers, except for the infant mortality, human sacrifice, tribal warfare and other horrible parts that a modern person would do almost anything to avoid.
And as a corollary, our lives seem so gray and empty compared to theirs, but on the other hand, I can’t possibly quantify how much I prefer having alive children to the alternative. You could even say that I gladly make the sacrifice of living in a soulless suburb to keep them that way.
There seem to be implicit sacred values built into us, and we sacrifice all lesser values for them. We do this collectively and repeatedly and reliably, as soon as technology provides us the opportunity – even when we don’t need to, when it only feels like we do.
Despite finding EA logic strongly aesthetically appealing, I find that my brain still thinks that I need to devote 99.9% of my wealth to securing an optimal future for my children, even though they would be absolutely fine with 20% of it. I imagine this process repeating itself across everyone and for all time, and easily see how we get ourselves into these situations.
Sometimes it’s not even that, though. We are very capable of sacrificing important values to heuristics that don’t apply. A simple example: I am part of a student organization that purchases and resells cans of soda and similar-sized and -priced snacks/drinks. We have a particular sale price, which is about half the cost of buying the same item from a vending machine. Yet, I am constantly fighting against the tendency of my fellow organization-members to buy *the cheapest possible soda*, which then sits in our refrigerator for months at a time, rather than buying the slightly-more-expensive-but-still-profitable kinds of soda that sell off in a matter of weeks.
It’s sort of the opposite of cost disease, in some ways – the price stays the same, but quality drops arbitrarily low, because people keep following the “try to get a cheap one” heuristic. In a small system like my organization, this just makes me angry and prevents me from drinking things that I like. In a larger economy, too much “buy cheap” disincentivizes quality, which either degrades quality of life or increases the cost of living over time as things break more frequently, degrade faster, and work less well.
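To put toy numbers on the soda example (every figure below is hypothetical, chosen only to show the shape of the trade-off): what matters for the organization is profit per fridge slot per week, not margin per can.

```python
def profit_per_week(cost, sale_price, weeks_to_sell_out):
    """Margin per can divided by how long a can occupies fridge space.
    All inputs below are made-up illustrative numbers."""
    return (sale_price - cost) / weeks_to_sell_out

cheapest = profit_per_week(cost=0.20, sale_price=0.50, weeks_to_sell_out=12)
popular  = profit_per_week(cost=0.35, sale_price=0.50, weeks_to_sell_out=2)
print(f"cheapest soda:       ${cheapest:.3f} per slot per week")
print(f"pricier-but-popular: ${popular:.3f} per slot per week")
# The "buy the cheapest one" heuristic wins on margin per can but loses on
# throughput once shelf time is counted.
```

With those (invented) numbers the slightly dearer soda earns roughly three times as much per week of fridge space, which is the part of the picture the heuristic misses.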
Recognition is the first step to change.
Apparently it takes many years to get to the second step. Possibly more than a human lifetime.
I just read a podcast transcript you might find interesting, relating to the ability to get to the second step. It’s an interview with Naval Ravikant, a Silicon Valley investor with an interesting philosophy on life. He talks in part about habits and the monkey mind (starting on page 7), and ways we might change or control them. I highly recommend reading the whole thing.
You can develop charitable habits if you desire that. What works for me is every time I get a raise I devote a portion of it to charity. I’m used to and pretty content with my current standard of living, so if it doesn’t change I’m still happy with how my money is allocated to my priorities. I’ve made charity a priority relatively recently, starting small and allocating more money to it with every raise. I’ve picked organizations that have been evaluated as effective and align with causes I feel affection for (St. Jude’s Research Hospital, for one) and an increasing proportion of my income goes there.
You can get to the second step today.
Thanks, I’ll check that out.
There’s lots of stuff about modern day that makes life better than hunter-gatherers, aside from avoiding those horrible things you mention. Like, I’m into hang gliding and I’m happy people invented and honed a reliable, high performance, durable piece of equipment that allows me to fly in the sky. I like to travel and experience new places, and cars and planes make that much easier. I like to gain knowledge and learn more about other people and how the world works, and nothing beats the internet for that. If I want to get away to nature, on my terms, I can do that, but I can still have heating and air conditioning on days with uncomfortable weather. I enjoy privacy and I suspect there’s more of that for modern people than hunter-gatherers.
Did hunter-gatherers do human sacrifice? I thought it was more of an agricultural thing.
Quick Google search suggests “maybe”. I know that they would euthanize the elderly and feeble by various means, which is kind of similar.
I suspect that in the American context, railing against “the elites” is simply a different phrasing of this experience. It’s not that education per se is a bad thing. It’s the attitude that having a particular set of education makes you a better person that is the problem.
Yeah, I’ve spent the last few years trying to convince elites of this. Very few of them actually get it. It’s a deeply frustrating process.
There is a lot of hate on rectangular grids. But New York (objectively the best city in the western hemisphere) is to a significant extent a planned city with rectangular grids. It also includes planned large parks, public squares, public transit, and planned architectural restrictions requiring light to reach street level.
The problem with a lot of planned cities isn’t that they are planned and legible, it’s that they are badly planned. The NY grid has small enough blocks that people can easily walk to street corners. Streets are narrow enough that pedestrians can cross easily. Many neighborhoods are zoned with first-floor commercial and upper-floor residential, so that there is activity at street level and stores and restaurants are close to where people live and easily walked to. Brasilia does the opposite of all these: wide streets, long blocks, non-overlapping zoning. It’s not the central decisions, it’s the bad decisions.
Without offence to New York, I don’t think that statement’s obvious enough that you can just assert it like that.
It is to a New Yorker 😛
But the reason lots of cities are badly planned is that planning is hard, because there are lots of hard-to-quantify factors – things not known at first and discovered only through trial and costly error.
Also, factors often change over time. Cities that were successful with mostly foot traffic might not have been as successful when automobiles became ubiquitous, and attempts to accommodate automobile traffic in older cities pretty much always disrupt patterns of foot traffic and the secondary effects of those patterns (e.g. “eyes on the street”). Jane Jacobs discussed this specific issue a lot in Death and Life of Great American Cities. You can’t plan for everything that hasn’t been invented yet.
This is a form that occurs naturally wherever population density is high enough, but actually tends to be outlawed by top-down urban planning. New York’s best buildings are often grandfathered in and noncompliant.
My sense is this is the default development model right now for any place that is starting to achieve a certain level of density. And the even bigger trend is to recreate communities that put daily amenities (grocery stores, restaurants, entertainment, etc.) within easy walking distance of housing, whether it’s single-family, townhouse, or apartment (frequently all of these in a mixed-use, multi-income development).
That’s the preferred zoning model at the moment.
I actually loved living in New York because of the grid. The whole place felt like “my” village, because I’d discover a cool place and understand immediately how it fit into all the other cool places I already knew and how to find it again – if you can count, you can navigate. Compare to London, which just feels like a bunch of disconnected anecdotes.
And this principle applies to time as well as space. Remembering “dates”, though individually mechanistic and impersonal, helps you navigate history and understand how one part connects to another.
You must have lived above 14th street.
Also applies to e. g. Kyoto v. Tokyo.
Three points about New York from a near lifelong resident:
1) New York’s grid was devised before the advent of the automobile. This is really, really important. I don’t think grids are so bad in and of themselves – lots of cities are gridded – but cities that are designed expressly for the automobile tend to not work very well as cities. I’m not anti-car as such, but designing the city specifically for the car was a very bad idea. New York’s gridded streets are normal city streets, not eight-lane semi-highways as in Brasilia. It’s the latter that’s the problem.
2) Much of New York – including most of Midtown Manhattan – is actually kind of ugly at street level. The grid isn’t the main reason for this, but it doesn’t help.
3) Even I would quibble with the statement that New York is “objectively the best city in the Western Hemisphere.”
Park Avenue is an eight-lane divided highway. Eighth Avenue is 4 lanes or more (one way). In general the avenues are quite wide. The major crosstown streets are also quite wide, typically two through lanes each way.
You’ll note that my statement was about eight-lane divided semi-highways. The word “highway” was really important, there; the “eight-lane divided” part was meant more illustratively.
If you want to get technical about it, Park Avenue is eight lanes only if you count the parking lane; I wouldn’t. Ditto for the crosstown streets, they’re (mostly) six, but only if you count the parking lanes. Most of the other avenues are three to five traffic lanes. Even if you do count the parking lanes, all of these streets have traffic lights on literally every block; they may be eight lanes in some cases but they are in no sense highways, or even semi-highways.
More importantly, the word “highway” usually implies (at least in eastern US usage) high speeds and some form of controlled access (though maybe not fully controlled). This does not apply to any of the avenues in Manhattan, even though they are generally pretty wide.
Manhattan’s avenues are not very Corbusian. At most, they are a half-hearted nod in that direction.
Edit: Two further points…
-As others have pointed out, Manhattan’s blocks are small enough that you could easily walk from one end to the other (although they’re big compared to, say, Philadelphia’s, and they’re oddly long east-west, they’re still small enough for walkability).
-Perhaps specifically using the phrase “eight-lane” was a bad idea. My point was that cities with grids designed in the 19th century (or earlier), such as New York, typically do not have this High Modernism problem, although they can be monotonous. The reason for this is that nobody was designing cities for the automobile in the 19th century or earlier, because the automobile did not exist.
Edit 2:
If there happen to be any other NY history geeks around here, this book is a fascinating history of the Manhattan grid.
The discussion of Le Corbusier and Chandigarh reminds me of a modern classic: Magnasanti. https://www.youtube.com/watch?v=NTJQTc-TqpU
Essentially this person achieved unheard-of population density and efficiency in a SimCity game, via obsessive planning and years of effort. And documented it in a very entertaining YouTube video.
Yes ! I normally stay away from “me too” posts, but this time I couldn’t resist. That YouTube video actually gave me nightmares back when I watched it the first time.
Somewhat uncharacteristically, the comments on the video are pure gold:
“is this a letsplay by satan himself?”
“Why does everyone in Magnasanti die before they reach the age of 60? Why is there no one alive in Magnasanti who remembers the time before the Overlord?”
“This is perfect utilitarianism. I’m impressed.”
“Why is this video fucking terrifying?”
“Shit got real when they used mathematics paper for planning.”
Ahah I’m sure someone somewhere saw that as “coded racism”. Was it supposed to be a dog-whistle?
Anyway, the kind of peasants James Scott is talking about seem very different from modern “peasants”. His examples help to illustrate the importance and complexity of local knowledge and how it relates to just surviving. But there’s a way in which being a subsistence farmer is complicated and risky, where working the register at Starbucks just isn’t. In fact this job is probably fully automatable and will be automated in the future.
The kind of local tacit knowledge poor people have is vanishingly trivial. Even the check-cashing example is what… a few cents on the dollar? This is a far cry from high modernism screwing up the food supply and killing people.
I’m not in favor of high modernism, but modern peasants really do seem dumb. I imagine it would be pretty hard to argue against a technocrat armed with all the relevant statistics that poor people should be banned from buying lottery tickets (basic figures here). Modern peasants don’t seem to hold much useful local knowledge, and in fact likely hold lots of harmful local anti-knowledge.
Thank god it’s politically incorrect to talk about ways in which the poor might be partially responsible for their condition.
I think you’re making a very subtle mistake here.
I agree, and suspect most people would agree, that the poor make poorer choices than the wealthy do in their own lives. But it doesn’t follow that the wealthy can effectively manage the poor’s lives at a distance and with no skin in the game.
Because that’s what we’re talking about. Let’s say you banned the lottery: then what? That 9% of their income is still burning a hole in their pockets and I promise you it’s still not going to go into savings accounts or 401K plans. What you would need is for the poor to have better habits and you can’t legislate that.
Very good point! I can easily imagine they’d go elsewhere for their “fix”. It might even be more wasteful than playing the lottery.
What about requiring peasants to have a valid “financial stability” card before buying sins? Like you need $2000 in a savings account and no outstanding debt before they’ll let you buy lotto tickets.
Not in favor of this. Just imagining from a technocrat’s perspective.
I just don’t see it ending well. Remember the bit about the English tax on windows and doors leading to poor ventilation for centuries?
If we could put a little accountant on everyone’s right shoulder to counter the little devil on their left shoulder, that would be a good solution. Maybe cheap AI tools will help with that, like a FitBit for personal finances? Just spitballing.
Either way I don’t think that good spending habits can be implemented in a top-down way over a whole population. At least not without a lot more pain than is worth the effort.
To be fair, I think it was a French tax.
England had it as well. Lots of old houses have patches where there obviously used to be windows which were then bricked up.
England also had a wig tax, which apparently delivered the coup de grâce to English periwig-wearing in the late 18th century.
@Mr. X on Window taxes: The English window tax only taxed additional windows above a relatively large number (initially 10, but never went below 7) so is unlikely to have had quite the effect of the French one.
Interestingly, the French window tax (not necessarily a tax on French windows!) was imposed by the Directory in 1798, not by the monarchy…
Or it might effectively be the lottery minus state participation, in the form of illegal numbers rackets. Some of which are a better deal than the lottery (in modern times, I understand they not infrequently use the weekly state lottery numbers as an easy source of trusted winning numbers, while offering better payouts), but which will likely involve corruption and will inevitably fund other criminal enterprises.
On the other hand, there’s an argument that keeping the state in an adversarial relationship with the numbers games has some advantages. States becoming addicted to lottery revenues, and therefore actively advertising and promoting playing, probably isn’t in the interest of the public good. Nor is pretending that it’s specifically funding some specific cause (usually education) while eliding the fact that money is fungible.
Considering that virtually all lotteries are state-run, I’d settle for my government just declining to scam poor people out of their cash.
It’s not a scam, it’s a tax on being bad at math……
> I imagine it would be pretty hard to argue against a technocrat armed with all the relevant statistics that poor people should be banned from buying lottery tickets
I agree that lotteries are terrible, but what vices or pleasant fripperies does the technocrat have that could equally be scrutinised as Bad For You, Bad For Society? And who is going to make the technocrat give that up the same way they can ban poor people from disposing of their spare income in undesirable ways?
Anyway, the point about asking the 19-year-old single mother in the Bronx is that once you’ve built your projects or tower blocks, she is the one having to live there, and she has to cope with all the things you forgot (because it never occurred to you, because you’re not in that situation). If she’s atomised from her old neighbourhood community and isn’t living near her mother/sisters/cousins/friends, who minds her child when she has to do the shopping? So she has to bring the child in a pushchair with her when she shops.
And because there are no corner shops/local groceries in your rationally-planned neighbourhood, she probably has to take public transit into the city centre/commercial district to do her shopping. That limits her by (a) whether she can manoeuvre the buggy onto and off a bus and (b) how much she can bring back with her at a time.
So much time gets eaten up by trivial tasks because they can’t be performed in a straightforward manner because the idea behind the grand central planning and the actual experience of the people using the services or living in the buildings don’t cohere.
If technocracy fails in predictable ways, then a possible solution is more technocracy. In your example we clearly should have thought to put in (90-degree) corner stores every so often! If you observe that we miss something else, we’ll include that too. Yay technocracy!
A general thesis against technocracy has to say that there are fundamental reasons why you can’t just get all the details right with sufficiently good planning. A lot of the article is laughing at people who only tried the 90 degree angle solution.
Ultimately the problem is that failure is expensive. As the article says for Brasilia, “The available budget was in the tens of billions.” – some percentage of that has to be regarded as wasted, let alone the opportunity cost of not having a better city on that site.
Maybe you can retrofit the tower blocks with corner stores, but surely this is more expensive than building them in the first place – all wasted resources that have to be attributed to the failure of planning.
As long as they’ll get it right eventually, the technocrat dream will never die.
The general thesis is that if you’re not the person living in a given context, you might be more educated on average, but will on average make worse decisions. Since you’re a technocrat for a large government:
1. your mistakes are enforced with coercive power
2. with other people’s money
3. are reversed more slowly than individual mistakes.
Larger organizations ‘learn’ slower than individuals– on average, they will adapt worse than individuals. That’s a good reason to be skeptical of them.
> If technocracy fails in predictable ways, then a possible solution is more technocracy.
Indeed, and learning from mistakes is no shame. But the larger point is that the heady days of centrally planned, rational solutions to messy human problems did it all from a top-down perspective: it wasn’t “who are our customers? who are the people who are going to live here? what are their needs?”, it was “we will build the perfect modern progressive environment, put people into it, and they will change to be perfect modern progressive people” with the addendum (tacit or explicit) that if they didn’t change voluntarily, they would be forced to change – like it or lump it.
Chesterton wouldn’t necessarily be against the built over the natural, but he would disagree with the forcing part, and the part where the rich/powerful put impositions on the poor/powerless that they would never countenance others in authority putting on themselves, and be damned to it being “for your own good”.
From “A Defence of Detective Stories”:
Chesterton once noted
“A little while ago certain doctors and other persons permitted by modern law to dictate to their shabbier fellow–citizens, sent out an order that all little girls should have their hair cut short. I mean, of course, all little girls whose parents were poor. Many very unhealthy habits are common among rich little girls, but it will be long before any doctors interfere forcibly with them.”
You can’t solve the problem of predicting events (including hypothetical ones) in location A using location W, when those two are completely unrelated to each other. Planning is just acting to fulfill predictions: you can’t plan a city you don’t know anything about, unless your basic plan is to build something relatively simple and then back the fuck off while the citizens customize it.
There are too many details. A human being, who can not possibly make a pencil, certainly can not plan a city to sufficient detail.
Satanic paedophilia, they tell me.
Probably an acquired taste – I’ll stick to booze.
> The kind of local tacit knowledge poor people have is vanishingly trivial.
Move to a ghetto and see how trivial it is to live without it.
> I imagine it would be pretty hard to argue against a technocrat armed with all the relevant statistics that poor people should be banned from buying lottery tickets (basic figures here).
I wrote a blog post about this. tl;dr: they’re not buying lotto tickets to produce a net economic benefit, they’re buying lotto tickets because it’s the only plausible-seeming way to give themselves hope for a better future.
Implying I don’t live within walking distance of a Walmart.
Anyway, maybe there is a bunch of tacit knowledge it takes to get by – when it’s safe to take the bus, for example. But my point is that you actually can automate a lot of stuff. A Starbucks register takes infinitely less skill to operate than the subsistence farming described in the OP.
I’m sympathetic to this argument. But I’ll bet the intelligentsia aren’t. Their rebuttal will go: “Yes but…” and you’ll lose the persuasion battle, because every Sensible Person (TM) knows you shouldn’t play the lottery when you’re living paycheck to paycheck.
It takes less knowledge to work a cash register than to be a farmer, no question. That said, I really wonder where you live that Walmart is considered a marker of ghettoes – I find they’re usually in the nicer parts of town, and usually much more common in smaller towns as well.
And yes, the technocrats would be unsympathetic to that argument. There’s a reason I tend to use phrases like “smarmy assholes” when I get too far into seeing their worldview. Even when I agree with them (and I mostly do on this one), there’s a distinct lack of empathy.
Houston. Shrug.
In their minds, they probably think poor people should suck it up and delay gratification on this issue. They’ll have a whole extensive literature on the glories of delayed gratification to draw from.
One thing I wonder about is why the poor buy PlayStation 4s instead of PlayStation 2s. Like, why don’t they just stay 5 years behind mainstream gaming and save a bunch of money? To the technocrat, this seems like a no-brainer. But then it got me thinking about all the magic of participating in something “new”, as opposed to solving puzzles and quests that millions of people finished ages ago. Still, not enough to overcome the technocrat’s intuition that these people should be more serious about delaying gratification and getting out of poverty.
I think a big chunk of it is pride. If you’re the cheapass who buys tech a decade out of date to save a buck, you’re a nobody, especially when your buddy over there just bought himself an Escalade (hey, it may get repo’d in a month, but that’ll be a heck of a month). In a world where true success is seemingly impossible, reaching for the superficial markers of success looks like the best deal you’re going to get. They might be willing to delay gratification, but they have to believe it’s coming someday.
@Volume Warrior: I do something in line with buying 5 year old game systems. I generally don’t watch new TV (some exceptions for content that is only good while topical, like humor news shows). Instead I’ll wait several years after a show is done and see if people still think it’s great. If people think a show is great 5 years after it’s off the air, then it was probably legitimately good and not just hyped.
This has spared me the time sink of watching Lost, later seasons of Battlestar Galactica, and many others. This is being frugal with time rather than money, but it’s the same basic plan: use a time delay to avoid costs.
I know a woman online who sometimes buys tickets now. But when she and her husband were poor, it was every single time. For the dream.
What a great review and contemplation. Thanks.
I have a feeling that many in the community here will find bits and pieces of validation, from the Marxists pointing to communal peasant living as preferable to western colonialism, to libertarians pointing to carefully planned but disastrous government intervention, to the conservatives of the Chestertonian sort, whose subsidiarity favors tradition over heedless progress. A coalition of neo-liberal out-groups; however, the technocrats do have sanitation in their favor, and that (and similar improvements) counts for a heck of a lot. Almost enough to justify the hubris.
By the by, a while back there was a discussion of the word technocrat here, and some confusion about why anyone would have negative connotations toward a word meaning rational governance. This essay is a good exposition on that.
Is legibility the motivation behind Imperial Japan’s introduction of Japanese surnames to Korea? I’d been told that the party line was that Koreans had had “too few” surnames, and that the Japanese had helped out by “offering” new ones, but the motivation never really made sense to me (until I read this post, potentially).
I think the most obvious lesson is the importance of respect. Yes, expert knowledge is a real and useful thing. But experts are not Plato’s philosopher-kings, they’re just regular people who’ve focused their lives on particular topics. That isn’t a bad thing, of course, but it’s not as good a thing as experts like to think it is. Ordinary people, despite having zero formal education on a lot of specific topics, have still lived in the world for quite a while and seen a few things. Their error rate will be higher in a given field than an expert’s, but the reduced error rate for things closer to your own life and your own experiences will counteract that quite well.
In other words, stop talking smack about all those “mouth-breathers” in the other political party. They know more than you think they do. Perhaps not quite as much as you, but enough that you should stop and listen instead of bowling right over them. (And just to be clear, that’s a generic “you”, not aimed at anyone in particular)
I really appreciate this review. I had heard of Seeing Like a State but had put it on my far future reading list. This review has convinced me to move it up the queue. Thanks to Scott (Alexander) for a good summary and discussion.
I’m not a left-leaning anarchist, nor have I read the book yet, so I hesitate to comment too much on the author’s message or intent. That said, in response to Scott (Alexander)’s comment that “there’s not much left” of the thesis after the Green Revolution etc. (which admittedly was the first thing I thought of), it sounds from the review like there would still be plenty left. As I might expect from an anarchist, the lesson is not that science and innovation are bad; it is that forcing change is bad. From SA’s review, he noted that one of the best outcomes in Tanzania might be that the High Modernists came up with a great idea, whereupon they would find that it had leaked into the community and had already been creatively implemented and improved. In other words, don’t stop studying and trying to improve; if it makes sense, it will spread. If it doesn’t, it won’t. Don’t force it. The Green Revolution spread without force, so it and other modernist successes don’t seem to generally contradict a thesis of “don’t force it.”
It also occurred to me that a further unspoken critique of this model is that, if outside people – other High Modernists, perhaps – optimize for “building an army to kill you and steal your stuff,” then it doesn’t really help if your practices are better or more sustainable. The ability to mobilize tax revenue in the medieval period was a key driver of military success, and most military campaigns, for centuries, consisted of “wander around looting, killing, burning, and terrorizing the opponent’s peasants.” So the modernizing impulse might be locally inconvenient to a 15th-century French peasant, but still better than the alternative of doing it the old way and waiting for Henry V’s marauding longbowmen to come steal your livestock and burn your fields whilst ostensibly combating the Valois kings, because Henry might have been better at extracting tax. (Note: just an illustration, since historically it was probably the reverse – the French taxed more easily than the English.) I assume that would fit in with SA’s priors about “Molochian” incentives.
There are three problems with simply opposing High Modernism:
1) Those parts of local cultures that resist large coercive organizations can interfere with large voluntary organizations. For example, a local culture that despises literacy can make it harder to hire local tax collectors. It also has obvious disadvantages.
2) The people with local knowledge include the local oppressors. A more legible system makes it possible for the local tyrant and the central tyrant to oppose each other and give the lower classes some breathing room.
3) It’s possible to oppose the superficial aspects of High Modernism while still centralizing power. Planning for curved streets instead of straight streets is like getting a hangover from scotch and soda and, as a result, giving up soda.
For a beautiful take on these themes in micro, see Stewart Brand’s “How Buildings Learn”. Wiki: https://en.wikipedia.org/wiki/How_Buildings_Learn, and a 6-part BBC series is available on YouTube.
From Wiki: Brand “fully rejects the ‘center out’ approach of design, where a single person or group designs a building for others to use, in favor of an evolutionary approach where owners can change a building over time to meet their needs. He focuses specific criticisms on modernist innovators like Buckminster Fuller for making round buildings that do not allow any kind of additions or internal divisions, Frank Gehry for making buildings that were hard to maintain, and Le Corbusier for making buildings that did not take into consideration the needs of families. Brand was also critical of French development during the 1980s which did not take local conditions into account and ended up not serving their purpose, like the central Library which had to take money away from buying new books to deal with the heat produced by so many windows.
Brand stresses the value of an organic kind of building, based on four walls, which is easy to change and expand and grow, as the ideal form of building. This embrace of traditional box design as the optimal structure puts him in direct contrast to thinkers like Buckminster Fuller, who proposed the geodesic dome as a better solution for buildings.”
Philadelphia has a planned rectilinear grid. So does Manhattan above 14th. So does Washington, D.C. though you’d never believe it if you ever tried to navigate there. A lot of cities have grids and were never shunned for that reason. Many cities (notably Philadelphia) even have blocks of identical housing and were never shunned for that reason. So I’m not sure the disdain for grids is so well-founded.
No, there’s not an inherent problem with grids. But the obsession with grids is part of a bigger habit of mind that is summarized really well by the phrase “rectilinear grid fetish”. relevant link
Also, I tend to think the grid parts of those cities are mutually dependent on the parts of those cities that aren’t grids. Also, Boston is an unplanned city, but the Back Bay is largely a grid (it was built more recently, on landfill) and it’s one of the more economically vibrant areas of the city. But I don’t think that if you put downtown Manhattan or the Back Bay in the middle of an empty field it would be a successful city. Cities are ecologies, and thriving ecologies tend to have a lot of distinct parts that interact in complex ways — ecologies are all about boundaries, so I suspect a grid with non-grid boundaries would basically always work better than either all-grid or all-not-grid. (The above essay is relevant to this point too.)
I wouldn’t call the Back Bay vibrant. I would call it very high-priced and rich. Actually, the term “frou-frou” applies very well: it deals in high-priced goods and services that most people don’t need, want, or purchase very often, but it at least does so with far more style than your usual financial district. The result is that you know the people who work in the office buildings are gulag bait, but you don’t want to demolish those lovely churches, the comic shop, or the sushi restaurant. After all, you can kinda sorta imagine using those some day.
So is San Francisco – a grid laid out without regard to the hilly landscape. Which rather adds to the character of the city.
Peruvian economist Hernando de Soto (https://en.wikipedia.org/wiki/Hernando_de_Soto_Polar#Main_thesis) discusses the problem of slums. How would Scott discuss a slum? It is anarchy perhaps in its purest form, combined with poverty. The government seems unable to improve a slum without something resembling destroying it, rebuilding it, and giving the slum-dwellers free homes. The people in the slums certainly live in an anarchic setting, but what a life.
De Soto argues that by providing property rights within a slum, those who live there will develop an incentive to invest their capital (what little they have) into building something more robust. Over time a slum can evolve into simply a poor neighborhood. Slum dwellers could invest in their shacks, work together to build infrastructure, and develop a small but real amount of equity.
This, of course, would require the state to bestow upon them the right to own property, as well as an institutional force to defend this right when other citizens steal or damage their homes.
In the end it’s all about property rights.
>Actually, one of the best things the book did to me was make me take cliches about “rich people need to defer to the poor on poverty-related policy ideas” more seriously.
George Orwell probably wrote some of the greatest work on this point, both in ‘The Road to Wigan Pier’ and ‘Down and Out in Paris and London.’ As he lives through the daily grind of poverty, he highlights all these small institutional barriers that drag you down. My favorite point of his is how he speculates that the government could create small farms for London tramps and let them live and farm there. While the farms might not even break even, it would still be better for everyone than the problem of city homelessness and vagrancy. Obviously this won’t work as well for the schizophrenic alcoholic, but there are a lot of Americans who are ‘down and out’ on the streets and would benefit from structure. Could this be induced without infringing on their ‘liberty’? Hard to say…
As far as the review goes, the author seems to almost get there, but fails at the last step by viewing anarchy as the solution. The real solution is to build a state that recognizes property rights but allows individuals to develop on their property with little regulation (heuristic: only as much regulation as needed to make up for Coase-theorem frictions).
Is the state unwilling to do so, or unable? Maybe it requires “legibility”. How do you get that without “something resembling destroying it, rebuilding it, and giving the slum-dwellers free homes”?
Thomas Jefferson imposed a super legible grid of property boundaries based on latitude and longitude on all land in the U.S. west of the Appalachians.
You can see it out of an airplane window.
It’s more efficient than the medieval system of defining properties used on the East Coast, although it’s less charming.
@NatashaRostova
Already been tried:
https://en.wikipedia.org/wiki/Society_of_Humanitarianism
There were various issues. The idea was that the people would become productive and pay back the costs, but this didn’t work out too well. So the project ran out of steam due to a lack of money. Also, the small farms turned out to be too small to be viable, so they had to be scaled up, which made it hard to help many people.
The backers were also disappointed that the vagrants didn’t disappear from the cities (a decent number didn’t want to go, or went back disappointed).
So after a short while the focus mostly shifted to penal colonies for vagrants and the like. After a couple of decades & after the industrial revolution really got going in The Netherlands, wages were much higher in the cities and it became much easier to fight poverty there, rather than in rural areas.
Here’s my 2002 interview with Hernando de Soto, which I think clears up some common questions about his work:
http://www.upi.com/Top_News/2002/05/08/Interview-De-Sotos-plan-for-the-poor/80441020891151/
Dear God yes. Seconded for epilepsy.
Seems to me that there is an information content threshold analogous to the Nyquist limit:
1. The information content within an optimized urban plan’s “signal” is high.
2. Traditional / metis urban planning is analog in that it is a perfect representation of that signal, but relatively impervious to analysis.
3. High-Modern / scientific urban planning is digital, in that it is an analytically-empowered and therefore imperfect representation of the city’s “signal.”
4. Scientific / digital urban planning must “sample” the signal to an appropriate degree or it does not adequately reproduce or analyze that signal.
5. Brasilia is essentially aliasing in the domain of urban planning.
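For anyone who wants to see the analogy in the signal-processing sense, here is a minimal aliasing sketch (Python/NumPy; the 9 Hz and 10 Hz figures are arbitrary choices, not anything from the thread). Sampling a 9 Hz signal at 10 Hz, i.e. below its Nyquist rate, produces samples indistinguishable from a 1 Hz signal:

```python
import numpy as np

fs = 10.0                                # sampling rate in Hz -- too low for a 9 Hz signal
t_samp = np.arange(0, 2, 1 / fs)         # the "samples" the planner actually takes

sampled_9hz = np.sin(2 * np.pi * 9 * t_samp)   # true signal: 9 Hz, above the 5 Hz Nyquist limit
alias_1hz   = np.sin(2 * np.pi * 1 * t_samp)   # a 1 Hz signal passing through the same points

# Up to a sign flip, the under-sampled 9 Hz signal is identical to 1 Hz:
print(np.allclose(sampled_9hz, -alias_1hz))    # True
```

The reconstruction looks smooth and plausible but is simply the wrong signal, which is the sense in which an under-sampled plan can be internally consistent and still miss what the city actually does.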
I’m surprised no one has mentioned either Dunbar’s number or private property yet (although Alexander nods at it when he mentions small-c communism). The whole first section of the review is basically just a recapitulation of the idea that there are advantages to living in very small groups (that don’t rise in population above our ability to keep track of everyone by first name) where everyone has common ownership of resources.
From S.A.’s description, this is not a book that supports the idea of anarcho-capitalism very well, which leaves me surprised that so many ACs tend to recommend it highly.
I found myself frustrated in the early sections of the review that, despite what is being claimed, there is nothing unique about statehood that leads to the problems being described. Rather, they are simply a natural result of large integrated populations. Sure, you can keep your insular community, even in the midst of a large, dense population, if you refuse to integrate, but that comes at a large cost. Although, I suppose it is possible S.A. is underselling the arguments of the book.
And of course, all I could keep thinking of, over and over, was, “What have the Romans ever done for us?”
The best part of that bit is that it’s rooted in the Talmud.
>The whole first section of the review is basically just a re-capitulation of the idea that there are advantages to living in very small groups (that don’t rise in population above our ability to keep track of everyone by first name) where everyone has common ownership of resources.
I think there are other ways to consider this. It seems what is happening is that when given property rights, and autonomy, communities are able to organically form institutions and practices that are optimized at a granular level of information, and the result of individual interactions. It’s similar to Hayek’s information problem, as well as Ostrom’s empirical study of endogenous institutions often outperforming centralized ones.
The problem arises when as states grow, somehow an intellectual abstraction occurs, that tries to derive and optimize organic/endogenous institutions from first principles and logical reasoning. When they impose this on the set of people, it breaks shit.
Nobody “gave” the communities property rights. The land existed, the community developed organically, and there was no claimed single ownership of things in the commons. There wasn’t a property right to give, because none was ever claimed.
And that works fine when you are an insular community with enough communal property for your needs, and every community within fighting distance is also an insular community with enough resources for their needs. As soon as this is not the case, shit breaks down unless you engage in coordination above the community level. Your analysis of cause and effect gets them backwards, I would argue.
You write this (or at least I read it) glibly, but this common-sense advice isn’t obvious to everyone. I think the Greeks had it right when they repeatedly pointed to hubris as man’s most serious and insidious failing.
And yet, I have found that the lesson the Greeks transmitted was often less “watch out for hubris” and more “everything is hubris”. This is probably true, at least in a manner of speaking, but it is also rather useless. The territory is of great importance, but do not call it a map.
“The closest analogy I can think of right now – maybe because it’s on my mind – is this story about check-cashing shops. Professors of social science think these shops are evil because they charge the poor higher rates, so they should be regulated away so that poor people don’t foolishly shoot themselves in the foot by going to them. But on closer inspection, they offer a better deal for the poor than banks do, for complicated reasons that aren’t visible just by comparing the raw numbers.”
I just want to note that your reason for believing this, as far as I can tell, is one article. One article is not a good reason to disagree with expert consensus. Beware the one study. Beware even more if the study isn’t a study but a semi-anecdotal report from one field researcher.
Also, I’m not even sure the article says what you think it does. The fact that something has benefits as well as costs and that people choosing it have reasons they can articulate does not mean they have good reasons.
Someone may state that they value a simpler fee structure that they never need to think about compared to a 2% drain on all earnings. But they may in fact be wrong: it might be that an extra 2% of earnings would improve their life more than the hassle of figuring out a fee structure and filling out paperwork to open an account would detract from it; but the hassle is a cost now and the 2% earnings boost comes in over time. People behaving in a short-term-easy, long-term-harmful way would not be shocking to anyone who’s ever watched humans do anything.
In the absence of capital (in some form or other), that higher cost now might literally be unpayable. Sometimes you need money fast. For some people, “sometimes” is “all the time”. Saying that they would have more money if they did this other thing first doesn’t help someone who needs money *now* and can’t wait for the fix to go through, and will *never* be able to wait, and has in any case heard that about five hundred times from various people despite the fact that this someone thought of it themselves.
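As a toy way of putting numbers on both sides of this (every figure is hypothetical; none of it comes from the linked article): the 2% drain is easy to total up, but so are the costs the raw comparison leaves out, and for someone with no buffer the latter can dominate.

```python
def check_casher_cost(weekly_pay, fee_rate=0.02, weeks=52):
    """Annual cost of cashing every paycheck at a flat-percentage shop."""
    return weekly_pay * fee_rate * weeks

def bank_account_cost(overdrafts_per_year, overdraft_fee=35.0, monthly_fee=0.0):
    """Hypothetical 'free' account: no per-check fee, but overdraft charges
    (and possibly a monthly fee when a minimum balance isn't met)."""
    return overdrafts_per_year * overdraft_fee + monthly_fee * 12

pay = 400.0  # made-up weekly paycheck
print(f"check casher, 2% of every check: ${check_casher_cost(pay):.0f}/yr")
print(f"bank, 3 overdrafts a year:       ${bank_account_cost(3):.0f}/yr")
print(f"bank, 12 overdrafts a year:      ${bank_account_cost(12):.0f}/yr")
# With a thin or zero buffer, a handful of $35 overdraft fees plus multi-day
# deposit holds can erase the 2% "savings" -- the kind of hidden cost the
# comments above are gesturing at.
```

Whether the bank comes out ahead depends entirely on parameters the raw fee comparison doesn’t show, which is the commenters’ point about liquidity and hassle.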
> I just want to note, that your reason for believing this, as far as I can tell, is one article.
One article, plus eight billion check-cashing establishments who get business from a sufficient number of competent adults spending their money on check cashing to keep their doors open. The alternative is to assume that poor people are just fine with spending their very limited money on things that provide them with no value.
This seems a little like question begging since the person to whom you are responding provided a plausible mechanism by which people might engage in activities that are against their own best interest. (And, empirically, it seems undeniable that people often do.)
I admit, assuming adults are generally capable of acting like adults is a pretty fundamental part of my worldview, so I tend to be biased towards explanations of why people have reasons to do X over explanations of why people are all a bunch of fools for doing X. It’s the same reason I dislike arguments of the form “All members of party X are evil/stupid and thereby misled by the evil ones!”, or “We need to be paternalistic towards poor people, because despite all our best efforts, they continue not spending nearly enough money on high tea and golf!”.
Most people manage much harder problems than “Where should I deposit this cheque?” every day. Assuming that they’re smart enough to raise children but too stupid to realize that fees cost them money seems implausible to me.
I see people smart enough to do complicated thing X but completely incapable of doing simple thing Y every day (crucially, including myself), so it seems plausible to me.
Fair, but in most of those cases it’s either rational ignorance (usually in the form of trading money for time), personal preference, or occasionally pridefulness. It’s also not normally such a widespread thing that a single type of incapacity can support a large and visible industry – about the only example I can think of that’s anywhere close is PC help for old people, but that’s something that came along late in life for them, unlike the concept of getting money.
Who are the experts you are referring to? And how do we know they are experts? Are we talking about people who successfully raised a family of 10 on government assistance in the neighbourhoods in question? Or are we talking about people who are experts in getting papers published in academic journals of sociology?
Pre-modernism was/is for the most part sustainable on local annual solar load and local capital. Modernism is applying many more BTUs to any given task. They’re barely comparable at all. Even the Green Revolution is just applying remote BTUs (fuel and fertilizer) and remote capital (transport and specialization) to the task of food production. High-energy vs low-energy. Modernism only “works” as long as there are sufficient BTUs available in transportable form that can be moved to the area you want to make modern. It is better to be more modern (energy-exploiting) for obvious reasons, and making more energy available makes people better off as long as that additional energy is permanent.
Well, it may be or it may not be permanent. Fossil reserves are definitely finite, but my own speculation is that our society will eventually move to nuclear, which is for all practical purposes infinite; nuclear can certainly generate sufficient energy to sustain an energy-intensive modern society until it destroys itself.
{AARGH: YOU EDITED!}
I apologize, I was re-reading and realized I hadn’t quite made my point of permanent low-energy vs temporary high-energy. That said I’m not very confident in nuclear (or really any high-energy source) in the long term net energy equation and am interested in what pieces of the modern toolkit work in a low-energy multi-generational society.
BTUs are a big part of it, but not the only part. Specialization-based efficiency, and its natural partner the industrial-scale production operation, still provides substantial gains even with no additional energy – look at the huge wealth of Renaissance Venice from its glass production and widespread trade, for one example. (Poor by modern standards, but substantially richer than their contemporaries, due to these phenomena.)
Not really disagreeing – Venice is coastal, so additional BTUs are available in the form of water transportation and fish calories, so I would expect the available energy load to be higher than inland, and so more “modern” things like specialization are possible.
How do you quantify the BTUs of “water transportation”? I don’t dispute that there are energy savings associated with it, just like there are energy savings associated with the invention of the wheel, paved roads, and railroads (though these things require energy investment to build), but I’m not sure how easily a number can be put on it.
Why british thermal units?
Because imperialism, FUCK YA!
Are those Le Corbusier quotes representative? Were they originally in English, or have they been translated oddly? Because that’s villain-monologue stuff.
Similarly, I grew up in an Eichler, and to this day remain a fan of slick rectilinear modernist houses. The Case Study homes are also great. Maybe it’s a California thing.
(But I also quite like Le Corbusier’s private houses, so maybe not.).
I’m currently a grad student at UC Irvine. The opinion of most of my friends is that Irvine is a safe place to grow up or to have a family, but for anyone else it’s really sterile and boring.
For example:
– They did at least mostly avoid the perfect rectangular grid, but everything is so spread out that you need a car (or at least a bike) to get anywhere. And while the few large malls are nice, they’re completely disconnected from everything around them.
– There aren’t any cemeteries; Irvine residents apparently want to believe that death doesn’t exist.
– There aren’t any homeless people. The police round them up and dump them next door in Santa Ana.
– There are plenty of Starbucks, but very few independent coffeeshops with any character, or other places for people to hang out. (The grad student association at UCI revolted against this by opening their own pub on campus.)
– The houses are all painted in varying shades of beige.
I can see how all of those could be positive things for some people. But I don’t think I could stand settling down in Irvine after my girlfriend and I both finish our PhD’s. We want a life that’s varied and interesting, not comfortable and beige.
Ah yes – leaving Irvine for exciting towns filled with cemeteries, easily accessible malls, homeless people, independent coffee shops and brightly painted houses! 🙂
Sorry, don’t mean to give you a hard time. But these are odd complaints. And the one that sounds like a real problem – no places for people to hang out – is kind of buried.
Rectangular grids have been out of vogue in suburban planning for a while; the new norm is curvilinear roads forming semi-enclosed enclaves of single-family housing, with a standard-issue suburban park (field, swing set, picnic tables, dozen or so trees, optional duck pond) planted every few blocks.
The goal is to foster an urban planner’s idea of family-friendly bucolic community — there are few entrances and exits to keep traffic on residential streets low, there’s no commercial zoning for the same reason, and it’s walkable as long as you only want to get to the park or a neighbor’s house. But if you want to see anything but ticky-tacky houses and half-grown shade trees, you need a car and plenty of time, and since the plan doesn’t allow for anything that would actually involve interacting with your neighbors, the intended community never develops.
I think that’s an older model.
The current model incorporates typical daily commercial activity, frequently in a central hub that is conveniently walkable. Streets are winding and short.
Or you have apartment or condos built on top of commercial storefronts.
Or both.
I grew up in Columbia, MD, which was developed in the late 60s. The first of your current models describes it pretty closely. Streets were definitely winding and short, and things were pretty walkable. My elementary school was about a mile away, through a mix of residential side streets and off-road foot/bike paths. My high school was about half a mile the other direction. There’s a big commercial district right next to the high school (just outside Columbia proper), and a smaller shopping center two miles the opposite direction.
@HBC — I live in an area with a lot of new construction going on. It roughly follows the model I outline above, but usually with townhouses or duplexes instead of detached single-family homes. No commercial space. I specified single-family because that seems to be more the norm nationally– my cousin just moved into a newly built one, for example, and they start becoming common a little further outside the city center than I live.
I hear a lot of talk about mixed use and walkable cities and transit-friendly development, but it doesn’t seem to be translating into action in my neck of the woods. Maybe it is elsewhere, I don’t know.
My impression is that greenfield development in the outer suburbs and exurbs is still following the cul-de-sac model, while redevelopments of the outer parts of cities and inner rings of suburbs are more likely to try for the walkable mixed-use pattern.
@Nornagest:
We probably need to distinguish between planning and residential development.
You are laying at the feet of urban/suburban planners that which may properly belong at the feet of consumers and real estate developers.
In places where the city/county planners are stronger, my guess is that you see much more of “mixed use”, “walkable community” and the like. Perhaps I am wrong about that, but I know that even 20 years ago the town I lived in had a running battle with developers because the planners and city council had a requirement that new neighborhoods be connected to existing neighborhoods, but developers wanted to build single access point, many cul-de-sac, type developments. (And of course the adjoining neighborhoods would also be up in arms, not wanting to be connected.)
So, where you live now, do you have any sense of what the battles between developers and the town council or city planners look like? Is the town council primarily composed of Republicans or Democrats? Do council members run and win on “pro-growth (laissez faire)” messages?
In order: the only fights I hear about are over whether a development gets approved at all (there is very strong NIMBY sentiment around here); supermajority of Democrats; no.
@Nornagest:
If I understand you correctly, it sounds like the planning office holds little sway, and there isn’t really an overall plan for how development will occur. As long as a project doesn’t seem likely to fall afoul of NIMBY, it’s likely to be approved?
Given that, I will reiterate that you may be faulting planners when they aren’t responsible, as they aren’t being given any power. The planning function, as a forward looking enterprise, isn’t something that’s being performed.
Obviously I am taking some guesses here, but it seems likely.
That’s not an assumption I’m prepared to make. The development fights that make the news or come to my eyes through other means are NIMBY-centric, but that doesn’t really tell me much about how powerful the planning office is — just that the local media and whoever originates outraged Facebook links think that fights over new development’s form, for whatever reason, are less provocative than fights over whether new development is allowed to happen. That could be because there are fewer of the former, but there are plenty of other possibilities too.
Word on the street is that new projects have to go through many layers of approvals, some of which have to do with neighborhood character etc., but I don’t know how many of those steps are substantial and how many are basically just hoops to jump through or solicitations for what I’ll politely call fees.
In any case, I didn’t intend to make a distinction between city planners and the people that do subdivision layout — obviously they have somewhat different motives, but they can be treated as a black box for purposes of what actually gets built.
@Nornagest:
The people who do subdivision layout are real estate developers, and they are doing it because they can sell houses at a profit. They are not incented to think about the long-term character of the city or the like. I don’t think that in this conversation you can treat them as similar to planners.
The developers have their own metis, but the lessons learned have to do with time-on-the-market and sale price.
Me either. I definitely see a lot of suburbs getting on board with a mixed-use, walkable downtown area, but the suburb as a whole has the same design as it always has. People do not seem willing to give up their giant green lawns and huge-footprint homes.
I also think people romanticize “community” a great deal. I live in a relatively walkable suburban region (I think walkscore gives it like a 65), but people mostly hang out in their homes, and I rarely if ever strike up a conversation on my daily walk commute, walk to the grocery store, walk to the library, or walk to downtown restaurants.
Now, what walkability is great for is meeting up with all your close friends, if you all happen to live close to each other. “How I Met Your Mother”-style friendships, where you meet at the same bar every night, are great/possible if you all live in the same neighborhood! Or on the same college campus! Not so much if the neighborhood isn’t walkable.
But that’s a… uhhh… small portion of most of our lives. Maybe not for my sister-in-law, who is spending something like $2.5k/month on a rent-controlled NYC apartment that looks like a Soviet worker hut, but for the majority of us, who want to have kids and actually tend to be slightly introverted, suburbs work out pretty well.
What irks me at the moment are the protected bike lanes in downtown Chicago, which do not seem to be used much at all, and are taking up extremely valuable downtown real estate. Maybe in the future we will all start using bikes to solve the last-mile problem, but right now they aren’t doing anything but stroking Rahm Emanuel’s ego.
Roughly my situation. I walk for twenty minutes most days, on the advice of my cardiologist. Frequently have “nice weather” exchanges with strangers. Once I got into a long conversation with a neighbor who happened to be the sister of a previous owner of our house, once a shorter conversation with a neighbor who has a magnificent persimmon tree (and other fruit trees) in her yard, but I think that’s about it.
This bothered me when I read it yesterday and I couldn’t figure out why.
If people want interaction with their neighbors why don’t they seek it out for its own sake? They’re right there – that they’re all that’s there should make that part easier, not harder. In TV shows people have things like neighborhood barbecues, or just talking to Flanders or Wilson or whomever over the fence.
I wonder if the availability of different kinds of entertainment has helped reduce neighborhood cohesion, separate from any design considerations. My parents (Baby Boomers in a rural area) talked about how they used to go to the neighbors–though “neighbors” here meant at least a half-mile to the closest house, and usually further–to play cards once or twice a week. I understand that dinner parties used to be a lot more popular than they are today, as well.
That takes effort and is frequently awkward. Better to build things so as to encourage spontaneous interaction. (This also reduces the effort and awkwardness of deliberate interaction later.)
The TV shows you’re watching were conceptualized, if not actually written, by Boomers or early Gen-Xers, so they reflect the cultural standards of that time. Since then the culture seems to have shifted, and I’m not entirely sure why — though the Death Eater concept of high- and low-trust cultures has something to it, I think.
But if you want communities and they’re not culturally self-perpetuating, you need to incentivize them. Those incentives aren’t there in the modern suburbs, which is what I was trying to get at — you need to actually go out of your way to get to know your neighbors, so most people don’t.
For Nornagest:
Where are you saying that you don’t have to go out of your way to meet your neighbors (compared to the suburbs)? Certainly not in the city – I lived in NYC for about 15 years. My wife and I moved out to Long Island in July. I’ve met more of my neighbors in the last nine months than I ever did in all the apartments I lived in in NY put together.
My neighbors on LI pop by to welcome us to the neighborhood, or we stop and chat on the street, or stop by when we’re both shoveling snow, or mowing lawns, or exercising; their kids come by to offer babysitting or lawn care, we run into each other at the local bars and restaurants, all kinds of things. Some of this is “going out of the way,” but not going particularly far. On the other hand, for probably 12 out of my 15 years in NYC, I wouldn’t have been able to pick out of a lineup the people who I physically shared walls with.
I think culture and the details of the built environment matter more than suburban vs. urban as such; large-scale planned suburbs are just particularly bad because they don’t allow for many of those details. When I lived in the city (in a medium-density mixed-use neighborhood), I didn’t know all my neighbors, but I’d get to know some of them just by running into each other doing laundry or getting coffee or tacos or what have you; I currently live in an older suburb near a commercial district, and that’s maybe half a step down but still not too bad. But when I lived in a subdivision, I didn’t know anyone, and I’m starting to see virtually identical subdivisions going up all around me. I don’t see this as a positive development.
I’ve never been to New York, but it has a reputation as an isolating culture.
Nornagest: You’re fucking right about some of the planned suburbs, I’ll give you that. I think the rule is pretty simple: if you (1) aren’t living in a rural area, and (2) you absolutely have to drive to get to any commercial area – say it’s more than a couple of miles away – you’re fucked, community wise.
I DO see some of that on Long Island, come to think of it. I’m thinking of friends who live in an absolutely beautiful town right on the water. It could be in a movie, managing to be quaint and elegant and beautifully proportioned with just the right amount of stuff and not too noisy and Jack Kerouac used to drink there and there’s some commercial fishing left. It’s got plenty of cool.
But the TOWN is on the water – all of it. The houses extend out over miles and miles, so you can technically live there while still being a 25-minute drive from the town itself, which is where they are. We went out with them in town after they’d been living there for about two years. It was the second time they’d been there. Yikes. Spread out your commercial space, people. It’s all good, no bad.
Right there, behind closed doors, in a society that doesn’t have a strong norm for knocking on the doors of strangers and asking if they would like to interact socially. Even if we had such a norm, there would be the problem that the closed doors and associated walls make it impractical to observe whether the neighbor in question is busy with something that one would be intruding on with an impromptu social request. The physical distance between the room in which you are being entertained by a digital monitor and the room in which they are doing the same, is not the limiting factor here.
If there were a pub on the corner where most men stopped by for a pint and some polite conversation after work, that would give everyone enough familiarity with everyone else’s circumstances to build on. Or if we hadn’t invented air conditioning and television to move most of our default leisure activities indoors. If we still allowed children to run around the neighborhood making their own friends, whose parents we would then have to be on speaking terms with. But we don’t.
To the extent that suburban social circles still exist, I think they are dominated by parents of similarly-aged children who regularly interact with each other at school-related functions and the like. And since these parents all have cars and are comfortable using them in every other aspect of their life, each circle consists of a random smattering of households across the local elementary school’s footprint, not the contiguous households of a single block or cul-de-sac.
@Nornagest
What gets neighbors knowing each other in your typical American suburb is kids. You have a set of kids from the same neighborhood who go to the same school and know each other, the parents meet that way, and the adult relationships develop from there. If you don’t have kids, what are you doing in a suburb (unless, like me, you hate people and/or like space; in the first case you don’t want to know your neighbors anyway)?
I’m wondering if part of what has changed is that increased mobility and better communication technology make communities less geographically defined.
My wife and daughter spend two evenings a week in SCA activities (early music and renaissance dance), some other time in church activities, all of which involve social interaction, none of it with people within easy walking distance of us. I spend a lot of time here.
Seconding Lasagna’s post.
I’ve been to Irvine and it was fine by me, but 1) I don’t much care for cities in general, which is a special case of the general different strokes for different folks deal, and 2) it is undeniably not built in a slick rectilinear glass-and-steel Cal Mod style.
When you first brought this up in the Mount Misery review – the possibility of getting rewarded at work for behavior that isn’t successful – the first thing I thought of was a Pauline Kael movie review I read ages ago.
I don’t remember what movie, but it was a wannabe blockbuster that flopped. Think something like Fantastic Four (but obviously not that). She barely wrote about the movie, but instead addressed a general question: given that these very expensive but boringly generic movies fail pretty often, why do they keep getting made? Why not try something different, since more of the same doesn’t actually work very often?
Her theory was that people don’t get punished for failures if they can point to a formula that they followed precisely. So if the formula is “bankable star + freshest hot actress + CGI + ‘you know, like Die Hard but in a [blank]’ = box office smash”, then the people responsible for the movie get rewarded with continued career success even when the movie flops.
It doesn’t matter whether the formula has ever really worked, it just matters that they followed it, and therefore can’t be blamed for failing, because they followed the rules. The failure gets shrugged off, the producers get rewarded for trying a “proven” model, and we get ten years straight of disaster films, none of which do particularly well. While if you try something that doesn’t fit the formula and it fails – well, what the fuck were you thinking?? We had a formula!
The old “Nobody ever got fired for buying IBM” argument.
Re: the Tanzanian farmers, for all of their impressive cultural knowledge about farming, they didn’t really seem to understand why what they do works, beyond the fact that it works (at least that’s what I took from this post). As wrong as the High Modernists were about farming and city design, the way in which they were wrong seemed to allow future central planners to isolate variables and fix those problems. I realize that this excuse would ring empty for the millions of peasants in the USSR who starved to death under these policies… but poor planning and humanitarian crises seem to be a broader theme in the history of the USSR, and I wouldn’t exactly place their deaths on the shoulders of high modernism. Totally unrelated, but it’s awesome that God hates censuses so much that She would kill 70k people. Also, I died a little bit inside when I read “libertarianism of the poor”.
+1
This reminded me very much of the pattern of the flaw of averages. Of COURSE there is no such thing as a System that will conform to the set of the averages of the few measures on which any idiot human will think to measure it! Even on those few measures, it would be infinitely better to design the System to be tolerant enough to accommodate the full distribution of parameters, from floor to ceiling, and optimal out to 2 SD from the average in both directions. The solution for States That Want To See is unsupervised ML: that is, things that grow the models from the data that is there, not just the data that a human, or, Elua forbid, a committee, thinks is sufficient. And even then, there will probably be forgotten parameters.
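A toy illustration of the flaw-of-averages point, in Python (the numbers are entirely made up; this is just to make “design to the distribution, not the average” concrete):

```python
# Toy illustration of the "flaw of averages": capacity sized to the mean of a
# skewed load fails a large fraction of the time; sizing to a high percentile
# (roughly the "2 SD" coverage mentioned above) fails rarely.
import numpy as np

rng = np.random.default_rng(0)
load = rng.lognormal(mean=3.0, sigma=0.6, size=10_000)  # hypothetical skewed demand

mean_capacity = load.mean()
p975_capacity = np.percentile(load, 97.5)

print(f"sized to the mean ({mean_capacity:.0f}): "
      f"{(load > mean_capacity).mean():.0%} of cases overflow")
print(f"sized to the 97.5th percentile ({p975_capacity:.0f}): "
      f"{(load > p975_capacity).mean():.1%} of cases overflow")
```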
Brasilia does have at least one whimsical feature. Its street layout was made to resemble an airplane, which is what passed for an icon of the future in 1960. The siting of the Universidade de Brasilia allows it to claim, without fear of contradiction, to be always on the leading edge of the left wing. Also not a cliche because nothing is ever a cliche: the military police academy lies at the far end of the right wing.
Thanks for this. I’ve long had Scott on my to-read-eventually shelf and this was a welcome summary of what I might expect when I finally dive in.
You might also be interested in David Graeber’s The Utopia of Rules (my review here, with links to where you can find much of the book on-line: https://sniggle.net/TPL/index5.php?entry=29Jun15), which focuses in a similar way on the modern bureaucratic state & corporation.
It expands on Scott’s view (that standardization serves the need of the State to have visibility into its subjects, even at great cost) by saying that bureaucracy and standardization serve all of us (sort of) by permitting us to think less and to more-easily stumble through life without having to expend much intellectual effort in doing so.
1) I was under the impression that the replacement of peasant villages with larger farms led to prosperity, at least in England after the enclosure movement. The boundaries could now be used as farmland, and this eliminated the need for farmers to spend time traveling between their different strips. Is there any source that gives crop yields before and after the enclosure movement for England specifically?
2) This article seems to come to the same conclusion as Thinking, Fast and Slow, in that intuition or metis works well when it has an opportunity to hone itself with direct results, as with subsistence farming; but rationality and High Modernism work better in highly abstracted fields, such as stocks or, perhaps, X-risk. (I’m not entirely sure how instincts and metis would even develop with regard to X-risk, since it’s an almost entirely theoretical field at this point, as far as I can tell.) This also works well for morality regarding charity; EA provides a better model through rationalist-type reasoning because people lack direct experience with the results of charity.
However, it’s probable that if you could develop well-honed instincts about charity or X-risk, they would be better than High Modernist/rationalist approaches. Thus, if you ever can use well-honed instinct, then you should; and if you lack sufficient data and observable laws, then High Modernism should only be undertaken cautiously and reversibly, if at all.
3) On a related note, the observation about how top-down High Modernist reforms tend to fail until they don’t (e.g., with Soviet planning versus the Meiji Restoration) reminds me of Ozy’s post on revolutions as high-variance. Sometimes science-based approaches do really well and sometimes they don’t, and there are too many factors to really figure out why, or we don’t have sufficient data.
If I could offer my own summary of this phenomenon, it might be something like “the peasant farmers were doing ‘science’ better than the ‘scientists’” in the sense that they:
1. Recognized there were more metrics by which one could measure “success”
2. Experimented with lots of different configurations, in an iterative process
3. Did not assume that facts and ideas from one domain transferred easily or perfectly to another
Eh. Not really.
It’s more like memetic evolution. The explanatory power of the meme is far less important than the practical effect. More successful memes out-compete less successful ones, but that isn’t really a conscious process.
You can’t say insects or bacteria are engaging in science, even though they definitely have strategies that work against insecticides and antibiotics.
Perhaps the best antidote to modernist design is Christopher Alexander, whose very Hayekian view of emergent order and the value of design tradition first started appearing more than 50 years ago. See “A City is Not a Tree,” and certainly “A Pattern Language.”
Totally second that. Christopher Alexander’s human architecture at all scales is the exact opposite of the ‘rectangular grid’ approach. It recognizes the patterns that actually work, and Alexander makes an effort to quantify the degree of confidence in these patterns, and references lots of research quantifying some of them. And by the way, he invented the whole idea of pattern languages, which is now used so often in computer science.
http://www.goodreads.com/book/show/106728.The_Timeless_Way_of_Building (reviews)
https://www.patternlanguage.com/labyrinth/apl-tour1.html (members only access to the patterns)
http://library.uniteddiversity.coop/Ecological_Building/A_Pattern_Language.pdf (bookz)
And put together James Scott with Christopher Alexander, and you get Scott Alexander!
Metrics are important, and once you see that it’s tempting to elevate them to the only thing you care about. But if you do that, you fall into the trap of assuming the map IS the territory. The sin of statist seers is not that they tried to make plans, but that they completely ignored the actual facts on the ground at the time and also NEVER LEARNED. An important trait of OODA loops is that they’re loops. You observe, orient, decide, and act, but then you have to observe again. Even when the planners observed and oriented at first, once they decided and acted they doubled and tripled down on act act act, as if the only thing that could be going wrong with their plan was that people weren’t implementing it hard enough.
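A minimal sketch of that point in Python (the function names are just placeholders, nothing from the book): what makes OODA a loop is that acting feeds back into observing.

```python
# Minimal sketch: OODA as a loop that re-observes after every action,
# instead of deciding once and then acting on an ever-staler model.
def ooda_loop(world, observe, orient, decide, act, steps):
    for _ in range(steps):
        observation = observe(world)   # look at the actual facts on the ground
        model = orient(observation)    # update the picture of the situation
        action = decide(model)         # plan from the *updated* picture
        world = act(world, action)     # act, then loop back and observe again
    return world
```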
I think something useful when thinking about this conundrum is to look at organizations that ACTUALLY control large amounts of people and resources, and what systems they use to do this efficiently. Walmart is famous for its logistic capabilities, and one of the ways it does this is by enabling price signals and information about under- and oversupplies of goods to be decentralized. Individual employees at individual stores are authorized to change how much of a good that store is going to order in the next shipment, and this means that Walmart has an up-to-the-minute database of what is needed where and when, provided by the metis of the people in that métier. You can mitigate the problems of seeing like a state (or large organization) by improving how well it sees, as long as you don’t decide that it doesn’t need to “see” at all.
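A hedged sketch of that idea in Python (this is not Walmart’s actual system; the class and method names are invented for illustration): order quantities are set locally by the people looking at the shelves, and the center only aggregates what it is told.

```python
# Hypothetical sketch of decentralized ordering: quantities are adjusted at
# each store by the people on the floor; headquarters only aggregates them.
from collections import defaultdict

class Store:
    def __init__(self, name):
        self.name = name
        self.next_order = defaultdict(int)   # item -> quantity for the next shipment

    def adjust_order(self, item, delta):
        # An employee nudges the order based on what they see locally.
        self.next_order[item] = max(0, self.next_order[item] + delta)

def central_view(stores):
    # The center reads the locally-set numbers; it does not set them.
    totals = defaultdict(int)
    for store in stores:
        for item, qty in store.next_order.items():
            totals[item] += qty
    return dict(totals)

stores = [Store("A"), Store("B")]
stores[0].adjust_order("umbrellas", +40)     # rainy forecast seen locally
stores[1].adjust_order("umbrellas", -10)     # overstocked; clamps at zero
print(central_view(stores))                  # -> {'umbrellas': 40}
```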
Legibilization is also not purely corrosive. Making things standardized and simplified is valuable to a lot of people (as one notices about weights and measures or USB cords for power adapters), but a useful adage is “Everything should be made as simple as possible, but no simpler.”
On Wal-mart, I’ve heard stories of overpriced obsolete electronics staying on the shelves forever (or until some ignorant consumer buys one), supposedly (I can’t now find the story where I read this rationale) because no one is authorized to reduce the price or just throw them out.
Obviously this doesn’t go directly to your point, in that they’ve obviously stopped ordering those items, maybe even before anyone currently working at those locations started working there, but it’s an inefficiency whose explanation I recall being attributed to a lack of the autonomy you’re claiming as its strength.
EDIT: Found the article claiming they weren’t allowed to reduce the price:
Yup. This is a good expansion of what I noted above about “closing the control loop” and actually looking.
Remember — people tend to associate the elements of High Modernism together, or the elements of metis/traditionalism together; but there’s no reason these things need to go together as bundles, or that the solution needs to be one of the two. Beware reasoning by association; just because people tend naturally to associate certain things, doesn’t mean they actually need to go together if you’re careful.
But why did they NEVER LEARN? It’s not like this was a one-time failure; it happened repeatedly around the world to many different people.
My impression from 25 years at IBM was that it was very, very rare for either the company or its high-level managers to admit that it had made a mistake – let alone to correct it. Perhaps this is a similar problem?
Perhaps this is a general problem: when corporate or governmental leaders drive some mistaken top-down choice, and they are too arrogant to admit they were wrong, perhaps the mistake can’t be corrected?
Honestly, I’d invert the question. Never learning seems to be the human default. Why did some organizations successfully learn and how can we duplicate that? (Yeah, OK, that’s just the same question reframed, but…)
(As for why never learning is the default: probably because learning, and admitting one is wrong, while good for getting results, is bad for winning political fights; and even when one does not deliberately prioritize politicking over doing the right thing, it’s… basically just something we’re evolved to do, with such things as self-deception and the arrogance that soreff describes.)
Because they had no reason to. They were insulated from the consequences of their actions: Le Corbusier died a famous and respected architect, window taxes were only repealed more than a century after they were first instituted, and when the Soviets redistributed farmland it was never the commissars who planned it that starved. Chavez did not have to wait in bread lines, and neither does Maduro. Even in democracies, where in theory rulers are answerable to the will of the people, there are still very few consequences to making mistakes that don’t involve a criminal conviction. Rent controls generally decrease the stock of housing and raise rents in a city, but they still remain a popular democratic measure, meaning politicians are incentivized to do the OPPOSITE of the public good.
Here’s a better one – we had sex roles and rules around the relations between the two sexes that developed over millennia and have been throwing those rules out over the past x number of years (with a massive acceleration in breaking the old institutions starting in the late 1960s) with the result that men and women no longer come together in stable lifelong partnerships and produce children. Rationalizing the rules of marriage and sexual relations is the Soviet collectivization of agriculture where instead of producing starvation it produces massively dysgenic fertility.
Obligatory XKCD. Punchline could have titled this essay.
And they’ve been torn down over a period of decades or more. And the environment in which those roles and rules developed didn’t look like the modern world: everyone lived in small, tight-knit communities so shame and shunning were effective, most people were farmers so extra children were an asset, infant mortality was high so extra children were a necessity, most jobs required strength so physical prowess mattered a lot, the world was violent so your family required literal defending a lot of the time, and so on, and so on.
See also, the third section here. “If you don’t like women’s lib, your enemy isn’t Gloria Steinem. Your enemy is the Vast Formless Thing controlling Gloria Steinem. In this case, that would be the demographic transition. You might be able to beat Gloria Steinem in a fight, but you can’t beat the demographic transition.”
(Also, the dysgenics thing has been happening, I’ve been hearing, for well over a century, and if anything, it’s slowed or stopped recently. I hope this makes you feel at least a bit better.)
Umm… I… what? Where do you get these weird strawmen? The least progressive proposal called progressive is post-office banking.
I wouldn’t say it’s really about self-help? I guess CFAR does some self-help stuff. I always thought the real point of “rationality” as a movement was not per se to persuade you about AI risk, but to give you a sufficiently naturalistic worldview that persuasion about AI risk and Friendly AI would come across as even remotely coherent.
This is seen as necessary because many people still actually believe in supernatural souls, climate (change or not change) being dictated by gods, p-zombies, non-natural moral properties, and other attempts to scream to the heavens, “Please, someone, anyone, let ontological naturalism somehow be wrong!”
Which then relates back to the AI risk: if you’re so sure about p-zombies that you think certain arrangements of computer parts cannot harm you no matter what, because they aren’t phenomenally conscious, then you’re very confused about a life-and-death issue precisely because you’re philosophically sophisticated.
Anyway, regarding the book, he sounds like David Graeber. I suspect I’ll like him for the same reasons, but also dislike him for the same reason: failure to focus on how we can anarchistically run a modern society with no central state apparatus, let alone fully automated gay space luxury communism.
If the only options are payday lenders and big banks, and you shut down payday lenders, you’re switching control to big banks.
The more Chestertonian option here would be to provide high-quality local micro-banking services through, say, post offices; if you get it right, payday lenders will either have to greatly improve their terms, or will go out of business, right?
Progressives have historically had an extremely cozy relationship with big business. FDR actively turned as many American industries into self-regulating cartels as he could. Most progressives hearken back to the economic model of the 1950s, with strong unions and pensions for all, which was mostly an artifact of an oligopolistic economy. The general response to the S&L crisis of the 80s was to try to rip up all the small local banks because they were too hard to regulate, and a smaller number would be easier to properly monitor. They hate big business on a gut level, of course, but they seem to promote its interests pretty reliably in practice, and not just because of corrupt legislators. (The anti-TBTF response to the 2008 crisis may be a change in this pattern, to be fair.)
Except that banks have gotten bigger since 2008, so probably not a change in the pattern. Laws such as Dodd-Frank may not directly increase bank size, but extra regulations definitely do so indirectly.
That was the explicitly stated rationale. But there are plenty of people with naturalistic worldviews who don’t buy AI as an existential threat. Big Yud tends to ascribe disagreement with his views to irrationality, and tends to have idiosyncratic views, so he perceives a lot more irrationality in the world than most people.
There may be people who disagree with AI risk because of supernaturalistic world views, but it doesn’t follow that those are the only people disagreeing.
The p-zombie thing is a case in point. There is no one who literally believes in p-zombies, in the sense that they think their neighbour might be one… it is rather a way of exploring certain issues, like the paperclipper, which is also not meant entirely literally… but if you misunderstand people as believing in p-zombies, then the world will seem more irrational than it is.
The process being described here matches up pretty well with my understanding of what happened to the Roman Empire after the Crisis of the Third Century. The system set up by Augustus (the “Principate”) had the Empire ruling with a very light hand, relying heavily on local proxies in the provinces and on what was left of the Republic’s institutions in Italy. It worked pretty well for a couple hundred years before collapsing into a protracted period of civil wars (the Crisis of the Third Century), in part due to difficulty collecting taxes efficiently enough to keep various major power centers properly paid off (*).
When things got stitched back together by Diocletian, there was a radical institutional shift (the “Dominate”) where the Empire now ruled directly in the provinces through a substantial bureaucracy. The overarching theme of Diocletian’s reforms (and Constantine’s similar reforms a generation or two later) was to make the Empire more “legible”, enabling the collection of more taxes and the better recruitment and support of soldiers, while breaking independent local power bases that needed to be kept happy separately from the bureaucracy and the military. Major features of the Dominate’s reforms, apart from direct taxation and direct bureaucratic Imperial rule, included: wage and price controls; an early form of serfdom to prevent peasants from moving away to avoid tax collectors; and vocational laws requiring the sons of bakers, armorers, entertainers, and minters to continue in their fathers’ professions.
(*) The Principate had been short on cash more often than not, but had kept on top of things through a combination of devaluing the coinage and periodically conquering and looting new territories. The Crisis hit when a plague and some big military setbacks hit at roughly the same time, damaging the legitimacy of the Principate’s institutions (thus raising the price of paying everyone off to keep things together) and weakening the tax base at the same time, while also removing the option of conquering and looting a new territory. That left devaluing the coinage again, and you can only do that so quickly before people notice and you stop getting away with it. So power centers stopped getting paid off effectively, and several of them looked for better deals by backing various rebels and pretenders. This had happened a few times during the Principate, but before, the rebellions had either quickly won or quickly lost and things got back to normal, while this time things stayed unstable for an extended period (26 Emperors over the course of about 50 years, often with 2-3 rival “Roman Empires” at any given time).
The parts about cities organized in “an evenly-spaced rectangular grid” reminded me of what I learned about German cities in Imperial times. But that was usually blamed on a lack of governmental oversight, not overzealous city planning. And conditions sound a lot worse than, say, Brasilia. They didn’t build big apartment towers to make space for pretty parks; they built them to make space for more big apartment towers. Which is good if you own that land and want to make maximum profit per area, but bad if you actually live there. That just leaves me with the conclusion that profit-oriented companies can be just as bad as overbearing governments.
Agreed, and it sounds like Scott would agree with this (see the section on farming in Montana). I think this also goes a long way toward addressing Scott’s fears about Moloch-style coordination failures: most of the real-world examples involve either lots of strangers (e.g. voters) or a few large, powerful actors (e.g. corporations). They wouldn’t happen in a small tight-knit community. Those groups are pretty close to “gardens” already.
Scott doesn’t bring this up, but it’s interesting reading this in the context of Biblical history. It would seem that whoever wrote the Bible was not a big fan of censuses. From 1 Chronicles 21:
It gets better. The Law of Moses (Leviticus 25:10) prescribes debt forgiveness every 50th year. So struggling peasants who needed capital couldn’t really sell their land, just mortgage it until the next jubilee. Now imagine you’re governor for a conqueror who knows nothing about Judaism and they annex Judea… but not Galilee. How do you rightly tax every village in this province?
“Hey, you, Jacob of Bethlehem! Who owns this land you’re working?”
“Er, my employer does until the next Jubilee, when the Law requires it to revert to Joseph the old carpenter of Nazareth or his heirs.”
“…”
(At this point you either already know or won’t be surprised to learn that Josephus records this census as causing a rebellion.)
I really wonder who would ever loan money under a jubilee-based system like that. I mean, sure, in year 1 you might. But in year 49, I wouldn’t give most people a dime.
I think the Pharisees came up with a system which enabled people to continue collecting debts after the jubilee year, but I can’t recall the details I’m afraid.
You are probably thinking of the prozbul which works around the rules of shmita (every 7 years) rather than the rules of jubilee (every 49/50).
If the rules of jubilee were ever followed (it isn’t clear) they certainly weren’t after the destruction of the First Temple, which was long before Pharisees existed. Shmita on the other hand continues in some fashion to be practiced to this day.
I cannot really see how debt forgiveness every 50 years would make it hard to sell land rather than mortgaging it. On the contrary, I would expect it to be harder and harder to find anyone willing to loan me something as the Jubilee date approaches (as my debt would be wiped clean in the Jubilee year, and the creditors would not get to see their money back). Could you please expand on your reasoning?
Separately from the loan forgiveness land also reverted to its original owner in the jubilee year. You just couldn’t sell land fee simple at all under that legal system (which may or may not have ever been in full use).
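A toy calculation of why both lending and land sales get squeezed as the jubilee approaches (the numbers are made up; the prorating idea itself is in Leviticus 25:15-16, which prices land by the harvests remaining):

```python
# Toy arithmetic: a field is only worth the harvests left before it reverts
# at the jubilee, so what a buyer or lender can rationally pay shrinks
# toward zero as year 50 approaches.
annual_harvest_value = 10   # hypothetical units per year

for years_left in (49, 25, 10, 1):
    max_price = annual_harvest_value * years_left
    print(f"{years_left:>2} harvests left -> worth at most {max_price}")
```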
What happened to people who died with no heirs, or too many heirs?
I think you were supposed to keep going until you found an heir. The important thing was to keep the land within the tribe. See Numbers 36 for example.
Much much more legal detail appears in the Talmud. However, for those of us who aren’t believers, it’s important to distinguish between what’s in the Torah (and in various parts that were written at different times), what’s in the Talmud, and what actually was the practice in various period of Israelite history.
Selling your land fee simple was conceptualized as a debt and an injustice. This shows up in other ancient sources than Leviticus: Plato’s Laws goes on at some length about how to keep the citizens of a colony from losing their land, and in Livy debtors selling their land to creditors was one of only a few issues Roman tribunes ever raised.
How does this not create a permanent underclass of people who don’t own land – or don’t own enough land – because their great^x-grandparents had too many great^x-grandchildren and they cannot buy land?
How could people move if they can’t sell their land and buy land at their destination? Was it possible to trade land? Was there a maximum amount of land people can own, to prevent people whose cousins all didn’t have heirs from inheriting all the land?
According to the Torah, Israel started out by dividing the land more-or-less-equally between every family immediately after conquering it. Of course, ownership could still concentrate as some families had more or fewer children, but at least that’s a start.
In Rome, you had occasional land grants to veterans, so at least there was a way out.
Does anyone know whether jubilee years ever really happened?
I do not believe it was ever observed in practice.
The milder seventh year version was apparently real enough so that someone, probably Hillel, invented a legal form, prosbul, to create a debt that would not be canceled in the seventh year.
Doesn’t this suggest it was not observed, but was merely taken seriously enough to require people to carefully rationalize why not observing it did not violate their religious principles?
I appreciated your discussion at the end about the lack of an overall lesson or clear message. But there is one – motivation determines result: “A good tree produces good fruit.” In each negative case the state was acting out of selfish motivation – trying to impose its ideas on others rather than trying to learn from them and maybe offer them something.
Why do educated people fall into this so often? Simple: they spend years of their lives studying things theoretically, but not in actual practice, which amounts to memorizing rules/words and their definitions. This is a miserable existence, but they suffer through it because they think it will pay off in the future. When they get into real life, they end up trying to rely on what they’ve already learned as a means of making their past count for something. That’s why everyone who does good in the world is famous for being a great learner – they’re not relying on their past to determine their present actions – they’re simply constantly learning and adapting. They value questions more than answers, and they ask questions rather than preach.
I got the opposite impression. Yes, there was coercion, but in order to justify it they appealed to motivation. The Tanzanian government really did want to improve life for the poor peasant farmers. Le Corbusier really did want to design better cities that made people happier and more productive.
I think I see what you’re getting at, but I also think “motivation” is exactly the wrong way to express it.
Contributing, as ever, superficial corrections in lieu of any original thoughts on the subject of the post:
Minor typo: ” But in cases there are literally about rich people trying to dictate to the poorest of the poor how they should live their lives, maybe this becomes more useful.”
Probably meant to say “But in cases *that* are literally about rich people…”
Perhaps it was changed between that and “cases *where* there are literally rich people…”.
Some thoughts, on reading this.
I never again want to hear about how everything was simple and straightforward until the modern state invented bureaucracy. Organically-grown local infrastructures are as impossibly complex as biology, it seems.
The past was really, really different from the present. The concept of measurement wasn’t even the same then as it is now, because they didn’t measure things for the same reasons as we do nowadays.
I’ve seen informal use of non-surnames to distinguish people with the same given name. I don’t remember my neighbors’ last names, but I do remember Polish Rob and Stoner Rob. It looks like a surprisingly good system.
That thing traditionalists worry about, where our organically-grown traditional wisdom is replaced with ivory-tower rectangular grids? It already happened! But at the same time, the passion for central planning and grand social engineering seems to have gone out the window. Extending the marriage franchise to same-sex couples looks seriously small-potatoes when compared to trying to make everyone live in featureless concrete cubes. And we don’t ban people from buying gasoline; we just heavily subsidize electric cars and try to distort the market in the direction we want.
It looks like the state has learned its lesson, at least around here.
but not compared to centrally planning everyone’s education, and thus most of their life, from the age of 5 to 25. Or deciding who should go to which doctors.
We don’t yet. Give them time.
How I wish this were true.
And the hundreds of thousands of pages of regulation that dictate decisions for businesses.
That seems… more like what it looks like to try and build your rational stuff atop an existing system rather than bulldozing and clean-sheeting it. (Compare the ACA to… I don’t know, a Sandersian one-size-fits-all zero-cost-sharing thingy.)
The labyrinthine complexity is an obvious failure mode of trying to conform your management to existing complexity. But it’s definitely not the same failure mode you’d get from trying to conform the world to your management-from-first-principles.
I do think single-payer likely leads to more degrees of freedom lost compared to tightly regulating health insurance and care. However, they’re both central planning with different flavors and magnitudes. Whether bureaucrats and politicians manage production through mandating detailed rule books, subsidizing and protecting favored businesses, and putting barriers to entry against upstarts OR directly making production decisions for the economy, it seems like they’re central planning in some form in either case.
Centralization of education, if you can call it that, has been happening gradually over the last century-and-change. There are standards, but there’s still a great deal of local leeway, to the point where Common Core is a huge thing, despite (so far as I can tell) defining broad standards rather than specific central plans. (Of course, compliance with these standards requires an approved plan, so it may be more de facto centralized than it looks.)
But up to 25? Not everyone has to go to college! And if you think that these experiences are going to be the same everywhere, that you’ll have the same education and peri-educational experience at BJU, BYU, Berkeley and some random state engineering college, then I think you’re at least a bit misguided.
I can’t parse how this bears any significant resemblance to the US healthcare system, at least. (Maybe the VA?)
Maybe the Scandinavians have a yen for central planning like that, but given that the US can’t even raise the gas tax when it’s really necessary, I don’t think we’re going to be banning gas cars here, not unless they’re already obsoleted by other, more market-oriented policies.
@grendelkhan says:
I agree not everyone has to, or even should, go to college, but we’ve spent the last 60+ years pushing more and more people to go, and I see zero interest in stopping that trend.
Medicare is a centrally planned price-setting organization for healthcare. It’s almost at Gosplan levels of bureaucratization. Yes, I was being facetious about which people go to which doctors, but it is unquestionable that (A) it amounts to centralized allocation of medical care, and (B) we’re moving in the direction of more centralization, not less.
And 10 years ago, you’d have said the same thing about adopting the state of Massachusetts’ healthcare plan federally. And I’d have agreed with you. That the US state is slow and inefficient, however, doesn’t mean it’s not moving in a specific direction.
“I never again want to hear about how everything was simple and straightforward until the modern state invented bureaucracy. Organically-grown local infrastructures are as impossibly complex as biology, it seems.”
This is true, but one thing in the book that didn’t make it into my review was that the evolved systems were surprisingly easy for the people involved. I think Scott uses the example of language – objectively very complex, but subjectively easy for native speakers to pick up. Compare to the tax code, which might be objectively simpler than English but which is harder to learn, especially if you’re uneducated.
“Scott often used the word “rationalism” to refer to the excesses of High Modernism, and I’ve deliberately kept it. What relevance does this have for the LW-Yudkowsky-Bayesian rationalist project? I think the similarities are more than semantic…”
I think the danger is for rationalism to operate in specific intellectual domains like the High Modernists do in various practical domains: rationalists know the *general* principles of rationality (and, sure, like Scott says, sometimes those do in fact help) but ignore the metis that comes from working in particular disciplines — the knowledge and experience of working on several sorts of problems. Sometimes the quirks and habits of a discipline will be as evolved to the specificity of that type of knowledge as the quirks and habits of a particular farming community will be to a specific terrain.
One of the things that bureaucracies are genuinely good for is making sure that essentially everyone does the nagging little chores where there’s a big payoff if and only if essentially everyone does them. It only takes one guy dumping his sewage upstream of everybody’s drinking water to ruin it for everybody, etc.
But one farmer who doesn’t like nice rectangular grids of monocultured cash crops, once the kinks have been worked out and that actually works, isn’t ruining it for anyone but himself. Unfortunately, “seeing like a state” usually means not seeing it that way. Seeing like a state means the same impulse that gives us “Someone is Wrong on the Internet!”, dialed to eleven and then applied in meatspace with police power.
And yet, very relatedly, they also excel at making (or at least encouraging, sometimes with varying degrees of coercion) people do nagging little chores that are either significantly costly or more or less pointless, like recycling. Bureaucracies seem to have essentially no ability to distinguish genuine public goods from what is perceived as a public good by a politically influential (not necessarily majority) subset of the populace in a state of religious fervor / moral panic.
Also, the line between the types of interventions that get through bureaucratic filters seems pretty arbitrary. I’m thinking of fluoride in the water (maybe has some marginal value for public dental health, but not as convincingly established as you’d think), versus lithium in the water (creepy as fuck, but AFAIK it’s at least possible, and probably quite likely, that the net effect on public mental health would be vastly greater).
That’s not quite right. They don’t give two figs about public opinion or the politically influential. What they care about is their sense of mission. Things they think of as their mission, they will pursue to the ends of the earth, budget permitting.
Of course, this axiom is limited in that it gives you no mechanism for predicting what an organization’s sense of mission is, or even whether a given idea will fall inside it or not.
“Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”
C.S. Lewis
“What they care about is their sense of mission.”
Originally perhaps.
That article on recycling was pretty sobering, especially after having lived in Yokohama, which has ridiculously elaborate recycling rules.
On the other hand, that article seems to say that recycling some things (paper, cans) is useful, so I am not sure the argument was that recycling is completely worthless.
Also, it made me think of this
Oh god, throwing away garbage is such a fricking pain in Japan and definitely has the quality of a civic religion, if not actual religion: to not properly sort your garbage is to be vaguely immoral (though landfill space is obviously going to be scarcer there than in the US).
Meanwhile, the overgrown construction industry continues to dam up rivers that cause no problems, build landslide protections into mountains no one lives near, and replace forests with cedar monocultures, resulting in everyone having a cedar allergy.
Yeah, it’s pretty ridiculous. I read that there is even a town that has over 30 categories for recycling! Somebody in my apartment block kept putting the wrong things out on the wrong days, and I was worried everyone would blame me because I’m a foreigner. Fortunately, nothing came of it.
In the U.S., Thomas Jefferson came up with a high-tech system for delimiting real estate boundaries west of the Appalachians to replace the old “metes and bounds” system that kept land lawyers like Jefferson busy arguing over deeds with descriptions like “from the old oak tree to the rock that looks like King George.”
In the 1780s, Jefferson instead applied a rectangular system based on latitude and longitude to all the Western land in America. Then surveyors went out and measured it and put in milestones corresponding to the latitude and longitude. My lot in Los Angeles is part of the grid Jefferson conceptualized 230 years ago. You can see Jefferson’s grid out an airliner window as you fly over giant rectangular farm fields. It is, to use Scott’s term, extremely legible.
And this Enlightenment system seems to work pretty well. It allowed the U.S. government to sell land directly to small buyers without a lot of costs for lawyers.
The U.S. has much greater equality of land ownership than Latin America, where the King of Spain tended to give vast hazy land grants to his conquistadors such as “from the sea to the mountains.” To this day, as the economist de Soto pointed out, Latin America is plagued by a lack of formal title among dwellers who have informal hereditary rights to small pieces of property on large land grants. They can’t mortgage their land and house because it officially still belongs to the conquistador’s descendant who is the big cheese in the neighborhood.
So, some of these Enlightenment reforms worked out pretty well.
Some not so well.
It seems pretty easy to distinguish the Enlightenment’s bright-line legal system– call it “descriptive legibility”– from the “prescriptive legibility” of High Modernism. Jefferson wasn’t telling people how to use the land, he was just providing clear units in which to do so. That still serves some of the government’s taxation goals as described by Seeing Like a State– e.g. the pints / liters issue– but it didn’t disrupt the existing local systems because, well, there weren’t any.
Which brings me to another difference between this case and many of the cases described in the review. Jefferson’s land system was the equivalent of Le Corbusier razing Paris. The reason it went so smoothly is because the land’s previous occupants had been killed and driven out wholesale. Trying to draw straight lines on a map without getting rid of the original population first doesn’t work so well. I hear that’s a big part of how we got the modern Middle East.
Just a note: Scott’s comments on Brasília sound really off the mark.
I grew up there, both in the suburbs and in the core, and while it is possible it was vacant at some point, that seems implausible today. Brasília is really expensive, so I would guess high vacancy would lead to lower prices, right? I would guess most people do not live in Brasília proper because it simply isn’t possible for them. Anyway, even the suburbs are rigidly organized. Even cities bordering the Federal District jurisdiction tend to be rigidly planned, so it doesn’t seem viable to say people live there because they want some kind of lively, messy city.
Also, about the posted picture: ALL parts of this picture are Brasília. There are different administrative regions there, but it is far from being the “suburbs” Scott mentions! The suburbs generally sit ~20 km away from this region (which is, again, very expensive to live in).
(There are a lot of people who do not like living in the superquadras, for sure, but they are mostly from elsewhere (more or less as I think happens in Washington D.C. as well), so they miss their homelands. On that note, I recall some colleagues who hate, hate living in Madrid – and I would give a leg to live there!)
As an unrelated note: I also lived in Rio de Janeiro, in Copacabana specifically. Copacabana is indeed a “lively neighborhood”, and it is great, but it is also considered quite decadent by locals. The more upscale neighborhoods are Ipanema, Leblon, and Lagoa, I guess, and they are, go figure, a bunch of rectangular blocks. Barra da Tijuca, a more suburban neighborhood for rich people, is known to look a lot like Brasília, in part because Lúcio Costa also worked on its plan. So yeah, Rio is quite “colorful”, but the most sought-after places to live are the more planned ones.
I don’t think this invalidates the book (which is more food for thought than a precise reference, I guess), but if it got these points so wrong, how wrong might it be on the rest?
Thanks for your input.
Other than the direct quotes, Scott shouldn’t be held responsible for my failings (I used a different Brasilia picture than he did). That having been said, according to Scott’s schematic it looks like the Niemeyer-designed part really is the cross/angel shape, which in the picture is clearly the part in the center?
I’m not any kind of expert, but I think Niemeyer designed the buildings, while Lúcio Costa designed the basic urban plan. But yeah, the shape-that’s-supposed-to-be-a-cross-but-come-on-that’s-an-airplane-just-look-at-it was the initial center of the project.
For what it’s worth, it’s received wisdom around here (São Paulo, at least) that Brasília is a bad place to live and one wouldn’t want to move there except to work for the government, for basically the same reasons you describe.
But I agree with brandizzi in that the most expensive and desired places to live here also tend to be in the rare evenly-spaced rectangular grids amidst the chaos (great street corners, though).
That’s it, the cross/angel/airplane-like part is the so-called “Pilot Plan” of Brasília, the core idea that drove the planning, but the rest of what’s in the picture is part of the same administrative area and is as heavily planned as the core. A lot of it, by the way, consists of regions reserved for specific uses, such as hospitals, hotels, and police academies. The part on the opposite side of the pictured lake is the most expensive region of all, where politicians, lobbyists, etc. tend to live, and it is rigidly defined as well. I would say that you can find more diversity in architecture and lifestyle in the superquadras than in the other parts!
It is important to keep that in mind, because it is easy to think Brasília is some kind of Stalinist hellhole. Well, there are some similarities, since it is a rigidly planned city in the middle of nowhere with a somewhat uncomfortable climate – although not even close to Siberia. But in general the people who live there like it, and this hypermodernism ended up becoming part of the culture. For example, there is a neighborhood (Cidade Estrutural) created by recyclables collectors who lived in a landfill. In many respects it was a cesspool of misery and violence, where children killed one another over broken toys or video game turns. And you know what? Its streets form perfect rectangles! (Fortunately things are getting better there, AFAIK.)
That said, I’m glad to give a first-hand account of it, but I don’t think it affects the review at all. In the end, a lot of the book’s similarly weak parts are addressed in the review as well.
I think there is an interesting aspect of city planning & architecture that is not mentioned here: the high modernists did not invent city planning. In fact, city planning as a craft or profession was already quite developed before high modernism. The differences between high modernist city planning and older city planning are mostly discussed in the article: modernist city planning uses gigantic monster blocks for housing, has segregated zones for different activities, and is generally built with people driving cars in mind. Older city planning covers the whole area with smaller buildings that are used for housing with shops on the ground floor, and focuses on walkable cities where the streets open up to town squares now and then.
But the thing is, city planning was already more or less a science in the 19th century, with literature on how to build a city and so on, and it was done in a centralized way by architects and city planners just like in the 20th century. The big question is why high modernist city planning could be so extremely bad. You can’t blame it on centralism and a top-level perspective alone, because then city districts built in the 19th century should be terrible hellholes just like Brasília. It is interesting to note that in Germany those 19th-century districts are the main areas of gentrification, as more and more people recognize their quality.
I think the main reason high modernism in architecture and city planning is so bad is that there was a complete break with tradition. The high modernists threw all the experience accumulated up to 1920 in the trash, in favour of their own concepts of minimalism, “clear lines”, “open space”, or whatever else they came up with. In the aesthetic aspect of architecture this is quite obvious: the minimalist boxes of high modernism look nothing like anything that came before in European architecture. But in architectural engineering and city planning they did just the same. And, as stated in the review, they were very good at selling it as functional and optimal.
The funny thing is, in some cases this is an obvious lie. Take buildings with flat roofs, for example. In areas where it rains quite a lot, like Central Europe, water accumulates on the roof and you have to do quite a lot of work so the roof does not leak. In many cases, after 10 years or so the roof starts to leak anyhow. This could be easily solved by building sloped roofs, as was always the tradition in Central Europe, but modernist architects love to build flat roofs because they look more functional. And, I mean, it’s quite obvious that sloped roofs are an elegant solution to the problem, so whoever came up with flat roofs was probably not looking for an optimal solution, but wanted the functional look.
So I would claim that high modernist architecture and city planning consists mostly of people with bad ideas, good PR and sadly also a liking for crushing everything that was before them.
Tom Wolfe’s short 1981 book “From Bauhaus to Our House” is a funny look at the mindset of High Modernist architects.
https://www.amazon.com/Bauhaus-Our-House-Tom-Wolfe/dp/0312429142
The Paris that everybody loves was centrally planned and imposed social engineering of the 1850s.
It’s actually extremely repetitious: all six-story buildings with mansard roofs. But there’s a lot of variation in the details.
I haven’t been to Paris, but I noticed the same thing when I was in Italy and Malta. The building styles are shockingly monotonous – everything is 3-4 stories tall with basically identical construction techniques and building materials, and nearly zero green space. I live in a pretty soulless North American suburb, and it’s far more varied and visually interesting, even if they do much better things with sculpture and other such micro-decoration.
…have you been following US politics over the last year or so?
Trumpistas aren’t underclass.
Yeah, my reaction overstates the facts, and Trumpism is kind of its own (terrible) thing. But it seems pretty obvious that there’s a close relationship between High Modernism and the values of 21st-century liberal elites. One possible interpretation of the 2016 election is that it exposed the gap between those values and the values of other US voters.
It’s not exactly the same kind of question as “what kind of farming works the best”, obviously, but the line between “what methods work best” and “what are we trying to achieve” is blurrier than most people seem to think.
Not all of them, but most think of themselves as such.
If you think Seeing Like a State is disorganized, you should read Two Cheers for Anarchism.
Better, read The Art of Not Being Governed, which is his other good book.
James Scott is sympathetic to anarchism, but I’m not sure he is really an anarchist.
He’s not. Two Cheers has in its opening a concession that the state is necessary.
If I was writing a book I don’t know if I’d want Scott to review it, because based on this review I now think I know everything there was to know in the book and more.
No, there’s lots of great stuff I didn’t get to put in. Especially if you like excruciatingly detailed histories of Tanzanian farming villages!
Tanzania has particularly poor soil. Next door Kenya has much better agricultural resources, even though you’d think at first impression that they were pretty similar.
A lot of Europeans did very well for themselves farming in Kenya and Rhodesia, but not in Tanzania.
That’s not entirely true; history is full of tales of little independent villages who get burned to the ground every couple of years by bands of random invaders. Centralized governments tend to keep those bands contained… of course, on the flip side, they also do tend to start major wars, so it’s not a win/win situation.
Speaking of which, IMO there’s good reason to believe that the Soviet famine (Holodomor) was not a result of poor economic planning, but rather a deliberately engineered attempted genocide. The objective was (as Scott said) to prevent independent farmers from becoming a cohesive political force, as well as to crush any remaining traces of Ukrainian independence.
Passages like these make me think that Scott doesn’t really understand what the book’s saying. The example of the native agriculturalists shows that High Modernist planners without local knowledge often fail even by their own criteria. But this is far from the only failure mode– in fact it’s possibly the least important.
Planners get their warm fuzzy feeling from a population that’s quiet, orderly, productive, clean, healthy, and otherwise manageable. Could it be– gasp!– that those in the population itself get their warm fuzzy feeling from other qualities in life? Could it be that the planners and the population have different values?
Obviously the planners’ incentives and the people’s are mis-aligned; Scott’s review touches on this briefly, in the purely economic form of tax collection. But I’d go even further and say that even planners acting “for the public good” will inevitably differ from their public on what “the public good” means. The only way out of this is to actually be integrated with the public– to “go native” in the most complete sense.
This might sound like a libertarian or anarchist formulation: “everyone should do what they want”. It isn’t. I’d describe it more as “communitarian” or small-r “republican” or, if you must, “distributist” (Scott definitely gets the G.K. Chesterton analogy right). The difference is that we really can have differences within the community that get resolved into a unified strategy (by consensus, majority rule, or the like) despite disagreement. There’s still a notion of the “public good”, but it comes from the public.
What this view rebels against is the idea of a planner who doesn’t represent that “public good”. That can happen either because the planner comes from outside the community and wields absolute authority (the case Scott usually seems to visualize), or because the planner represents only a small part of the community and wields disproportionate authority (the more common case in practice, and to some degree the current situation in the US).
James Scott’s objections seem to be right up Elinor Ostrom’s alley.
Also,
Neo-institutionalists seem to have a lot to say about these topics. From a Coasean perspective, the metis of local farmers provides the downward pressure on the optimal size of the farm, while the High Modernist approach provides the upward pressure. Roughly speaking, metis solves the knowledge problem and modernism solves the economies-of-scale problem, and the difficulty lies in formulating a legal environment that is conducive to firms assuming their optimal size and scope.
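To make the two pressures concrete, here is a toy model with my own illustrative numbers (not anything from Scott or Coase): per-acre cost rises with size as local knowledge and monitoring degrade, and falls with size as fixed mechanization costs get spread out, so the optimum sits where the two balance.

```python
import math

k = 2.0      # hypothetical per-acre knowledge/monitoring penalty as size grows
F = 5000.0   # hypothetical fixed cost of mechanization, spread over the acreage

def per_acre_cost(s):
    # Two opposing pressures: metis loss pushes cost up with size,
    # economies of scale push it down.
    return k * s + F / s

s_star = math.sqrt(F / k)    # setting the derivative k - F/s**2 to zero gives s = sqrt(F/k)
print(round(s_star, 1), round(per_acre_cost(s_star), 1))   # 50.0 acres at cost 200.0
print(per_acre_cost(s_star - 5) > per_acre_cost(s_star) < per_acre_cost(s_star + 5))  # True
```

Anything that raises the knowledge penalty (collectivization by outsiders) shrinks the efficient farm; anything that lowers fixed costs (cheap small machinery) does the same.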
Fishing rights work best when they are intensely localist. Regulating open ocean fishing has been very difficult, but Maine lobstermen do a good job of sustaining the supply of near-shore lobster. Just don’t decide to move to Maine and go into the lobster fishing business: you’ll probably find your boat 20 feet underwater tomorrow morning.
I know what you mean when you say you’re skeptical of people who argue that we should respect indigenous people’s knowledge or when they say that they have “different ways of knowing.” Describing it this way makes it sound magical or like there is more than one reality. On the other hand, I think there is something to be said for the idea that certain groups of people have accumulated a vast amount of locally-relevant knowledge through generations of living in a specific environment, and that this knowledge should not be simply dismissed as superstitious or irrational. I think it would be more convincing if people phrased it this way rather than say they have different ways of knowing or “ontologies.”
On the other hand, some aspects of “traditional” societies are superstitious, harmful, or maybe not the best way of doing things. Take the Dobu, for example. I also think science enables us to understand the natural world and sometimes suggests better ways of doing things. An image that has stuck with me is the final scene in the 1937 film version of The Good Earth. A horde of locusts are about to ravage Wang Lung’s field when his son, who has just returned from studying agricultural science, yells out “We don’t have to be subject to nature’s whim!” (or something like that) and helps the family build fire barriers to stave off the locusts. This seems like such a contrast to the now common romanticization of nature and traditional societies.
I also don’t mind living in grid cities as opposed to ugly, unplanned sprawls. As much as I relish variety in languages, there is something good about having a common language. Indonesia successfully adopted a national language without snuffing out the myriad other local languages. I also think the metric system is far preferable to a bewildering array of local systems of measurement, and I wish the US would adopt it. For one thing, standard ways of doing things allow for greater coordination among different peoples across larger expanses of space. Do you think scientific advancement would be possible if every village or county had its own measurements?
I think we have gotten to the modern world through a great deal of violence, disruption, and trauma. But I would rather live in my modern society than upland Southeast Asia. So I don’t know what the overall takeaway should be either. Remaking society from scratch and centrally planned economies are definitely bad. Maybe the best thing is some planning and a framework loose enough to allow for ad hoc decision-making and adaptation to local conditions.
It’s interesting that the metric system caught on but the famous French Revolution calendar did not.
Things like calendar reform and spelling reform were very big deals for a long time, but they’ve pretty much died out over the course of my lifetime. I was reading an article about golf by the steel magnate Andrew Carnegie a while ago, and it was written in a simplified spelling system that Carnegie insisted all of his articles be published in.
It seems like nerdish energies in the past went into various reform plans, but those have largely died out.
That’s an interesting point about the calendar, but was it in any way an improvement on the old calendar or just cosmetic changes done for ideological reasons?
A number of people have proposed spelling reform over the years. Noah Webster succeeded to some extent, which is why American English has some different spellings. I remember reading somewhere that Ben Franklin was also interested in spelling reform and lamented that English pronunciation was becoming so divorced from spelling that in the future English words would be like Chinese ideograms!
I imagine there are certain periods where there is a lot more revolutionary energy than others, and so a lot more general openness to reforms.
The revolutionary calendar was based on tens, so it was obviously more rational and superior, citizen.
It also, IIRC, gave workers one day off every ten days instead of one every seven like under the old calendar. Possibly this had something to do with its lack of success…
Here’s a pretty nifty calendar reform proposed by a lady in Brooklyn, Elisabeth Achelis, in 1930:
https://en.wikipedia.org/wiki/World_Calendar
George Bernard Shaw, who was probably the single biggest high-brow celebrity in the English speaking world in the first few decades of the 20th Century, loved advocating various reform movements. It would be interesting to see which ones were adopted and which ones didn’t catch on.
Paul Johnson argues that the 20th Century figure who predicted the social changes of the late 1960s most accurately was Cyril Connolly, a friend/rival of Orwell and Waugh, who during WWII edited the top new highbrow magazine Horizon. At some point in the 1940s he published a list of changes he expected in the future and they were pretty much what happened around 1969 in terms of liberation movements.
Perhaps the trend toward Connolly’s type of reforms made less possible the old fashioned progressive reforms advocated by Shaw, which would require centralized power.
The last time the calendar changed, it took the Pope to get it done.
The time before that it took Julius Caesar.
And not surprisingly, it took a couple hundred years for Protestant countries to adopt the new calendar.
Are you referring to people like Eduardo Viveiros de Castro and Martin Holbraad with the reference to ontologies? I agree, that line of theory often leads in pretty ridiculous directions.
Is that where that term comes from? Like a lot of academic shibboleths, so many people use it as if its meaning is common knowledge, but I find it difficult to trace the originator of the term. Another such word is “imaginary” as a noun. I often wondered if these writers know what ontology means, because they seem to be using it in a way that differs from how I understand it — i.e. the philosophical study of how we know things.
Epistemology is the study of how we know things. Ontology is about the things themselves, not our knowledge of them.
You’re right. Looking at my original post and follow-up post I see I wrote “ontology” both times but I actually meant to write epistemology. Doh! I think this caused a whole lot of confusion.
The term I have seen in more postmodernist works is also epistemology as in “different epistemologies.” They might use ontology in a weird way, too, but I’m not sure.
I believe Protagoras is right that epistemology is the study of the nature of knowledge whereas ontology is the study of the nature of being. I agree, however, that those who talk about “ontologies” tend to use the word rather differently than philosophers do. The anthropologists who talk about multiple ontologies seem to mean something like “multiple modes or ways of being” rather than “multiple studies or discourses about being”. My favourite critique of the whole multiple-ontologies shtick is David Graeber’s Radical Alterity is Just Another Way of Saying Reality. Where have you come across “ontologies”? I know Hardt and Negri have used some of the ideas, but it would be interesting to know how far it’s spread.
The meaning of “ontology” has further drifted thanks to its use by data specialists using it to refer to any data model. E.g. an ontology of bioinformatics; an ontology of bank transactions; etc. If you’re lucky, they’re at least referring to a formal model with a specification of what relationships between entities are considered valid. If you’re not, then they’re referring to a DTD from the 1980s with the label “ontology” slapped on over the top.
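For what it’s worth, here is a minimal sketch of what that weaker, data-modeling sense of “ontology” often amounts to in practice: entity types plus a whitelist of which relationships between them count as valid. The types and relations below are toy examples of my own; real systems would typically use OWL/RDF tooling and richer constraints.

```python
from dataclasses import dataclass

# The "ontology": which (subject type, relation, object type) combinations are valid.
VALID_RELATIONS = {
    ("Gene", "encodes", "Protein"),
    ("Protein", "participates_in", "Pathway"),
    ("Account", "holds", "Transaction"),
}

@dataclass(frozen=True)
class Triple:
    subject_type: str
    relation: str
    object_type: str

def is_valid(triple: Triple) -> bool:
    """Check a proposed relationship against the ontology's schema."""
    return (triple.subject_type, triple.relation, triple.object_type) in VALID_RELATIONS

print(is_valid(Triple("Gene", "encodes", "Protein")))     # True
print(is_valid(Triple("Gene", "holds", "Transaction")))   # False: not in the schema
```

If the “ontology” you’re handed can’t even do this much validation, it really is just a relabeled DTD.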
Oh, Scott. (Our lovely host, not the author.)
You got to talking about Brasilia, and soul-deadening cities planned so rationally no one wanted to live there, and I thought, “Ugh. Like Irvine.”
And then bam,
(also, I grew up in Irvine, the most planned of planned cities, and I loved it.)
I’m about 1/4th of the way through McMaster’s _Dereliction of Duty_. One of the factors that McMaster blames for the US’s failure in Vietnam was McNamara’s insistence on “scientific”, quantitative measurements of actual and proposed military strategies. This seems to me to be an instance of the same sort of modernism as SLAS complains about. McNamara was abetted in this mismanagement by a JCS chair named “Taylor” (nothing is a coincidence). McNamara’s notion was that by precisely calibrating the military activity that the US engaged in, a precise message could be sent to North Vietnam, which would respond in a predictable fashion. Needless to say, this did not work very well. I’m really curious to read McNamara’s book and see what he was thinking (and where he thinks he went wrong).
I never got around to reading McNamara’s book, but if you have a few hours free I strongly recommend The Fog of War: Eleven Lessons from the Life of Robert S. McNamara.
Similarly recommended, for similar reasons, are the writings of President Nixon’s Secretary of Defense Melvin Laird.
In particular, Laird’s 2005 article in Foreign Affairs, “Iraq: Learning the Lessons of Vietnam” presciently begins:
Today the Trump Administration (and the American people) are learning Laird’s lessons the hard way:
• creating a healthcare system is dicier than destroying one
• tearing down a wall is dicier than building one
• recreating an ecosystem is dicier than destroying one
• building trust is dicier than destroying trust
It’s unclear that the White House’s “Galt Wightes” have any realistic ability in these areas, any more than Johnson and Nixon had any realistic ability to win in Vietnam.
Duplicate comment, but, I’ll ask it again here: Why is your wall example inverted, compared to the others?
Answered below.
Interestingly, McNamara was also the guy who made the seatbelt popular, which is an area where science and data-crunching have done a ton of good.
At the level of “Senior Level Officer (Colonel – General)” the following works appear on the USMC Commandant’s Reading List:
• H. R. McMaster: Dereliction Of Duty
• Lawrence Freedman Strategy: A History
• Fred Charles Iklé: Every War Must End
These works train leaders to ask tough questions:
(1) After we destroy Iraq, what social infrastructure will arise in its place?
(2) After we destroy ObamaCare, what healthcare system will we create in its place?
(3) After we destroy our present biosphere, what biosphere will arise in its place?
(4) After we build a border-wall, how will that wall come down?
(5) After we destroy social trust, how will trust be recreated?
The CRL teaches USMC officers that it’s no bad thing to deprecate ideologies that lack credible answers to these tough questions.
That’s why it’s unsurprising that General James “Climate Change is Real” Mattis and General H. R. “Wiretapping Charges are Fantasies” McMaster are among the most ardently progressive voices in the White House.
These well-informed generals benefit from their profession’s dry-eyed broad-domain reading of history and science, not restricted to juvenile political and ideological fantasies.
Why is your wall example inverted, compared to the others?
In every case, the (socially, scientifically, ecologically, economically) ‘dicey’ part comes temporally after the (politically, ideologically) easy part. That this is a recipe for rash actions is the unifying theme that, in particular, Iklé considers in Every War Must End.
Sorry, I should be more explicit. Building a wall is substantially harder than knocking one down. The only hard part in knocking it down is getting the political will for it, not the actual doing of it. That’s opposite of the case of, say, rebuilding Iraq, which everybody wants to do but nobody really knows how to and where new problems are constantly springing up. By contrast, building a wall that large is a serious engineering challenge that could easily get bogged down in unexpected problems. If the examples are supposed to be “Once we do the straightforward thing, how do we get out of the resulting quagmire?”, I’m pretty sure that one really is backwards.
You have the ACA precisely backwards. Repealing it is politically difficult, not actually difficult. People were not dying in the streets prior to 2014. It was passing the ACA in the first place that is more in line with what you’re talking about.
Claim (Sniffnoy): “Building a wall is substantially harder than knocking one down.”
Simply wrong. For example, knocking down the Berlin Wall required the destruction of an entire totalitarian regime.
======
Claim (cassander): “People were not dying in the streets prior to 2014.”
Simply wrong. Consider for example, undertreated chronic medical illnesses in schizophrenics. These tough medical realities illuminate the moral consequences of further planned healthcare degradation; degradation that physicians and patients alike vehemently oppose.
That’s not the sort of difficulty that is relevant to what you were originally talking about. Please keep the context in mind. Indeed, I’ve already essentially addressed this point in my comment above; you’re basically ignoring the actual content of my reply in your response.
Apologies, but I really can’t help but get the impression you’re going for rhetoric/word-games here rather than actual argument. I can be more explicit yet if you think I failed to make my point clear, but for now I think I will skip it.
History teaches that the inescapable reality of “tear down that wall” makes for mighty tough policy … just ask the East Germans, or Israel’s Apart-Right.
That is why — as Iklé’s Every War Must End, McMaster’s Dereliction Of Duty, and Freedman’s Strategy: A History all discuss (per the OC and the CRL) — a policy of “Perish the thought” is mighty bad practice, both at the White House and in SSC.
Very regrettably, in both the White House and SSC, there’s a cadre of ideologues for whom “Perish the thought” is a primary policy objective.
Not everyone agrees with a political agenda of “Perish the thought” … particularly historians, scientists, engineers, mathematicians, writers, poets, artists, musicians, naturalists, rights activists, and physicians (etc.).
This pro-rationality cohort includes too, enlightened psychotherapists like Jonathan Shay, Eric Kandel, and Marsha Linehan.
It’s OK to say these things here on SSC, right? And it’s OK too, to critique works like (James) Scott’s Seeing Like A State in that light, isn’t that right?
Ronald Reagan was right. Reagan’s dictum “Tear down that wall” expresses a principle that is now, and always has been, at the heart of radical progressive Enlightenment.
Folks who hide behind walls prefer to build them higher. Their fear and anger need not rule us (fortunately).
@Eva Candle
You are correct; let me amend my statement. People were not dying in the streets before 2014 in any greater numbers than they are today. The ACA was passed very much in the spirit of “we must do something, this is something, therefore we must do this.” It’s a train wreck that spends an enormous amount of money for very little measurable good, and which very well might collapse under its own inadequacies. It was passed without asking any of the tough questions you’re arguing for.
Eva Candle distinctly reminds me of John Sidles’ posting style…
Edit: beaten to it by Nornagest and Dr Dealgood.
The comparison between a border wall and the Berlin wall is completely asinine because it’s inverted.
The Berlin Wall wasn’t built to stop West Berliners from moving to the East and enjoying the higher trust more prosperous society there (while undermining it with their presence). It wasn’t “hiding behind a wall” – it was “imprisoning your subjects”. Border walls and city walls meanwhile existed in literally every era in almost all cities for the exact same reason – to keep out invaders.
You write this as if ObamaCare wasn’t the disruption.
The cynical (and correct) view of ObamaCare is that it was designed to take advantage of the fact that it’s easier to destroy a complex evolved system than to recreate it. It was an unstable particle that was supposed to undergo nuclear decay into single payer health care.
Funny but those are interrelated – social trust has been destroyed specifically because of the impact of importing tens of millions of low trust aliens who then give birth to more low social trust adapted and encultured descendants.
I’m pretty sure that James Scott doesn’t think of the societies he’s talking about as being made up of “many actors competing with each other”. Anthropologists, like Scott, generally aren’t that keen on reducing the complexity of the social world to a model of individuals competing with each other (although there are exceptions), and Scott quite explicitly argues that capitalism involves a similar simplification to the state-run projects he’s mostly concerned with. Often the two come hand in hand, such as with the enclosure movement in England.
I agree. I’m not sure why Scott (our host) smuggles in the concept of competition, because although I haven’t read Seeing Like a State, I would be surprised if Scott (gosh, this is confusing!) emphasized competition as a key factor. In Scott’s other book, “Two Cheers for Anarchism,” he writes: “One thing that heaves into view, I believe, is what Pierre-Joseph Proudhon had in mind when he first used the term ‘anarchism,’ namely, mutuality, cooperation without hierarchy or state rule. Another is the anarchist tolerance for confusion and improvisation that accompanies social learning, and confidence in spontaneous cooperation and reciprocity.” There’s no mention of competition.
I think there are a few things that pair up well with this view.
1. Optimizing for the average- Everything has variance, when there are 20 different variables of interest, most people will be exceptional in some way. Far too often society designs in an inflexible manner and it punishes everyone for being exceptional in some way. For instance, your experience with patients reacting differently to medication.
2. Forgetting that we live in a Level 4+ world. Models are incomplete and inaccurate. Failing to account for this can lead to huge issues.
3. The importance of correlation. There are huge monocultures that may be better on average but carry huge risks, because if something goes wrong it goes really wrong. The world is more interconnected. See the Toxoplasma/Social Justice spread. See computer hacks that can affect millions, compared to physical hacks that can’t.
4. Lack of exploration- Everything becomes what we think is best, so we don’t learn much about what actually is best. Neoliberalism rules the world. In the first world everything looks pretty similar. There is no Archipelago where city-states are innovating all the time, just a bunch of boring Neoliberalism. One issue with academia is that there is only one of it: if we get one fundamental study wrong, then everyone can waste time for years. Might it be better if, especially in the soft sciences, we isolated groups for years and then reconciled their findings every few years? There is a tragedy of the commons here: everyone wants someone to experiment, but not if it affects them. See The Complacent Class by Tyler Cowen.
5. Requirements/regulation creep- initially things are small. Initially a car was just an engine, some wheels, and a seat. Now it’s got an entertainment system, safety systems, environmental controls. These are useful things but it becomes harder and harder to innovate because any change affects so many things. It’s hard to win by just making a better engine because a car is so much more than that.
5a. Collapsing things is hard- Because real systems are so complex, it is hard to collapse them down to size without missing something. Utilitarianism has this problem. Morality is more than just a value function; it also requires a degree of obligation. Utilitarianism seems crazy at face value because it demands optimal behavior. This can be fixed by adding a degree of obligation back in, but that just shows obligation is a needed feature.
6. Safety in layers- Far too often there is one layer to rule them all, and if it breaks then everything is terrible. In the legal system, if you have a technical loophole, someone will probably find it and maybe exploit it a bit, and you can take them to court and the court will smooth over the loophole if it wasn’t intended. In Ethereum’s automated legal system, if you make a mistake then you lose everything (well, until the entire ecosystem undoes it).
7. Testing every layer- The issue with layers is that each layer can only be trusted to the extent that it is tested. This is why Netflix runs the Chaos Monkey, to create failures and test the layers that have to cope with the layers below them failing. What really scares me is that many rather important systems are not tested and can’t really be trusted to work: think of constitutions, checks on (executive) power, countries leaving the EU, regions leaving the UK. The best side effect we can hope for from Brexit, likely Scottish independence, and the election of a likely authoritarian US president is that these fragile systems are tested, break a bit, and are made more robust.
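For point 7, here is a toy sketch of the chaos-testing idea (purely illustrative, not Netflix’s actual Chaos Monkey, which terminates whole production instances): randomly inject failures into a dependency so the fallback layer actually gets exercised instead of being trusted on faith. The service names and failure rate are invented for the example.

```python
import random

def flaky(call, failure_rate=0.3):
    """Wrap a dependency so it fails at random during chaos tests."""
    def wrapped(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError("chaos: injected failure")
        return call(*args, **kwargs)
    return wrapped

def fetch_recommendations(user_id):
    # Stand-in for a call to a real remote service.
    return ["title-%d" % (user_id % 7)]

def recommendations_with_fallback(user_id, fetch):
    # The layer under test: it must survive the layer below it failing.
    try:
        return fetch(user_id)
    except ConnectionError:
        return ["generic-popular-title"]

chaotic_fetch = flaky(fetch_recommendations)
results = [recommendations_with_fallback(42, chaotic_fetch) for _ in range(10)]
print(results)  # typically a mix of real results and fallbacks, never an unhandled crash
```

A constitution or an EU exit clause that has never had failures injected into it is in the position of the untested except-branch: nobody knows whether it holds until the day it has to.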
This post contains some traditional bullshit about the effectiveness of “old traditional methods” of farming in the Russian Empire in comparison with the Soviet “kolkhoz system”.
Actually, Soviet collective farms yielded more tradable and taxable crops than the barely subsistent peasant communes under the ancien régime.
The major agricultural market suppliers in Tsarist Russia were huge latifundia and rich peasant (kulak) households, but (un)fortunately the former were destroyed by the Revolution, while the latter were impoverished by the ravages of the Great War, the Civil War, and class struggle in rural Russia.
Is “class struggle” really the right euphemism for the central state deciding to scapegoat, pilfer from, and generally annihilate the kulaks?
Especially since “kulak” was a label applied randomly? One “kulak” deported to Siberia remembered the official responsible saying, basically: they ordered us to find N kulaks, and we chose you.
Hey now, it wasn’t entirely random. I mean come on, you need some sort of catch-all term for the ethnic Ukrainians and Slovaks who were less than enthusiastic about this whole “revolution” thing.
“So it does seem unfair for the newspapers to flash ‘Borgia Confesses!’ and ‘Borgia Burns!’ whenever a feminine mass poisoner has told all or has paid the penalty for her crimes. And they don’t mean Rodrigo and Cesare. They mean Lucrezia. But just try to convince any acquaintance chosen at random that Lucrezia was all right. He’ll only inquire, ‘Then what about all those funerals?’ There must be an answer to that if one could think of it.” — Will Cuppy, The Decline and Fall of Practically Everybody
“Coffee does nothing to me! I can drink all the coffee I want, and I’ll still sleep the same as ever!” – something like 500 caffeine addicts talking to me, who would later admit that they all sleep poorly and wake up feeling unrested (I’m a caffeine addict too, but at least I admit adenosine blockers gonna block.)
I have never had sleep problems related to coffee, even when I drank a couple pots a day. Went to sleep fine, woke up feeling rested. What persuaded me to cut back was the nauseously awful headaches I’d get when I went without for a day or two for whatever reason.
Incidentally, Scott, Metis is the reason why I couldn’t accept your parody of Zeus as a simpleton in your story the other day. Metis is a trickster-goddess, and Zeus has literally out-tricked Trickery and absorbed her into himself—making the cunning principle part of himself, and earning the title of poly-metis, multi-tricksy. In fact, the whole thing was how he could trick the very Fates—Metis, his wife, was pregnant, and because Zeus had risen against his own father, he was destined to have his son depose him (which is exactly what happened with his father Cronus (not to be confused with Chronus) and his father Ouranos). By eating (well, drinking (long story)) his pregnant wife, Zeus broke the chain and reigned forever. It was only because of that fertile seed inside him that he was able to give birth to Athena—under Olympian genetics, a goddess of wisdom can only be born of something that already has the principle in itself, in the same way that e.g. Phobos (fear), Thanatos (death) or Eris (discord) are all born from Nyx (the Starless Black Night), or how that one terrible goddess of insanity and violence, Aphrodite, is born of a divine penis, sperm and blood. Athena is only full of metis because Zeus also is (literally).
Prometheus’ story is a battle of wits, but it’s intended to show that, no matter how much foresight you have, ultimately you can’t out-trick Zeus. If the gods are just an anthropomorphic way of discussing abstract concepts, then Zeus stands for nothing else than the laws of physics. Zeus is The Way Things Just, Like, Are, Man. You can run and cheat and bluff but ultimately nature will catch up with you—Zeus is Nomos, the Law, Panoptos, All-Seeing (cf. similar characterization of the archetypal Indo-European Father-Judge-King mountain-lightning figure in other cultures, including obviously Yahweh.)
Cool!
A lot of Dutch surnames have an air of the surreal to them. Niemand, for example, means “no one”. (Speaking of metis, shades of Odysseus!) There’s also:
The list goes on and on. Some sort of explanation would seem to be in order.
The story I heard is that the Dutch were forced by Napoleon in 1811 to give up their patronyms and register proper surnames because legibility. Many of them thought “oh, this will all blow over in a few years” and signed themselves up for absurd names in protest. But it did not all blow over in a few years, and so here we are.
I’ve heard elsewhere that the protest story is actually apocryphal, and that some of the more scandalizing names have prosaic origins. (Naaktgeboren, for example, is apparently a corruption of the German term for a child born to a widow. Fair enough.) But there are so many weird Dutch surnames that it’s hard to imagine innocent explanations for all of them.
Now I’m wondering if this is a common occurrence among peoples forced to adopt surnames, or simply the Dutch gonna Dutch.
When Henry VIII forced the Welsh to adopt surnames, most people just used their patronymics. That’s why there are so many Welsh people with surnames like Davies, Williams, Jones, etc.
There are other books in this sort of sub-genre, but this is the foundational one.
It’s worth pointing out that Scott is not a libertarian in the American sense. He’s a left-wing anarchist, but he doesn’t even resort to citing anarchists as counter-examples to Lenin when there are Marxists like Rosa Luxemburg & Alexandra Kollontai he can praise instead.
Daron Acemoglu seems to me to be someone implicitly arguing against Scott, since he’s written that the spread of the French Revolution’s modernist institutions produced measurable improvements over their feudal predecessors. His work with Robinson, Why Nations Fail, seems to go whole hog for “state building” and the rationalization that is part of it.
I also thought of Robyn Dawes as a counter to Scott, since he was arguing that individuals relying on intuition err all the time because their misunderstanding of rational probability calculations means they don’t know how to generalize from evidence. Of course, most of Dawes’ examples of poor thinking in “Rational Choice in an Uncertain World” were from the supposed experts who constituted his colleagues in psychology/psychiatry.
Scott’s The Art of Not Being Governed is an interesting book on a related topic, more focused on his research in Southeast Asia, but even there I think he’s wrong about ethnic identities just being made up to confuse authorities. Improved genomic technologies seem to indicate there is shared descent among people who claim to belong to the same ethnic groups.
21st Century genomic analyses have generally not been kind to postmodernist theories.
I’m not sure it’s possible to stretch the definition of postmodernism so that it includes James Scott. I think “postmodern” is starting to be used to mean “academic work in the humanities that I don’t like.”
Has there been much genomic analysis of people in upland Southeast Asia? I think one of Scott’s main points in that book was that previously, those people in the uplands were thought of as a blast from the past that provides a glimpse of pre-agricultural hunter-gatherers; however, far from living in some pristine condition, their present condition is the result of constant interaction with state societies in the valleys. He argues, for example, that many of the people in the uplands are descended from people who fled the valley states long ago and so are not holdovers from a distant past.
Far from being postmodern, I found Scott refreshing to read by comparison to other books I had to read because he presents a coherent theory using straightforward language and arguments (I’m not saying he is correct, though).
I don’t think the concept of ethnogenesis is postmodern, nor does it contradict genetic studies — it’s just about how groups define themselves. For example, yesterday I was reading that people in Turkey are descended from the inhabitants that lived there before the arrival of Turkic invaders — yet they nevertheless think of themselves as “Turks.” Similarly, Celts living in England began thinking of themselves as “Anglo-Saxon” within a generation or two of the arrival of Germanic peoples.
The discussion of the difficulty of getting tax revenue out of peasants reminds me of discussions of cost disease and the difficulty of addressing it. Most money/resources end up devoted to the least legible sectors of the economy.
One more point in the same vein as my list above: measurements and policy are not independent of reality. Measuring something and making it a goal changes how people behave and usually corrodes the value of the measure (Campbell’s Law), and you should account for this (the Lucas Critique). Volkswagen’s emissions scandal and Wells Fargo workers opening unneeded accounts for customers are recent examples. Most long-used metrics embody some metis that mitigates this issue, but I worry that AI doesn’t account for it and may find a “better” metric only to have it completely destroyed by Campbell’s Law when implemented.
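As a toy illustration of the Campbell’s-Law point (the numbers are invented, nothing empirical): a proxy score tracks true output reasonably well until people are rewarded on the proxy and start pouring effort into gaming it, after which the score tells you much less.

```python
import random

random.seed(0)
N = 1000
ability = [random.gauss(10, 2) for _ in range(N)]   # what we actually care about
gaming = [random.gauss(0, 1) for _ in range(N)]     # independent talent for gaming the metric

def corr(xs, ys):
    # Plain Pearson correlation, no external libraries needed.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

true_output = ability
score_before = [a + random.gauss(0, 1) for a in ability]                            # honest measurement
score_after = [a + 10 * g + random.gauss(0, 1) for a, g in zip(ability, gaming)]    # metric is now the target

print(round(corr(score_before, true_output), 2))  # high: the proxy tracks real output
print(round(corr(score_after, true_output), 2))   # much lower: the proxy has been corroded
```

The metric itself never changed; only the incentive structure around it did, which is exactly why “find a better metric” on its own doesn’t escape the problem.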
To some extent this happens due to varying schools of thought. Most of my early career in economics was in the Public Choice Center at VPI. The dominant figure was Jim Buchanan who, along with Ronald Coase, had been deliberately pushed out of UVA. Buchanan went to VPI, Coase to Chicago. In both cases the heterodox work they did eventually got them Nobel prizes and largely influenced the field.
A better known example in Economics would be Chicago vs Harvard and MIT.
I have never posted a comment here before but was moved to by the paragraph beginning:
No, most of them did not make sense–not even at the time.
Having wide roads to permit faster vehicle traffic destroys walkability and makes everyone dependent on private automobiles or centrally-planned mass transit systems.
Rectangular grids mostly make a city easier to navigate for vehicles, not people.
Decoration is important, if only for the utilitarian purpose of making people care about their buildings and want to maintain, preserve, and upgrade them.
Keeping pedestrians off the streets destroys city life and again makes people totally dependent on technology or central authority for their transportation needs.
Huge brutalist apartment buildings located miles and miles from any actual wild-ish nature isolate people from the natural world (including adequate sunlight!), driving these periodic “back to nature” movements.
And without local touches, local culture dies, only to be replaced with depressing and soulless materialism, communism, or whatever other “ism” the elites have chosen.
And so on. Most of those ideas are just awful, and should have sounded awful at the time, too. They certainly should today.
It is dangerous for technocratic people like me and Scott A. and most of this blog’s readership to assume that we have all the answers, or that because we tend to prefer neat and tidy well-executed central solutions (a plurality of us are in tech, after all), these are necessarily the best for everyone. We are all someone else to someone else, and when they try to tell us we’re doing everything wrong, we usually don’t react very well, do we?
“Walkability” is simply the fashion of choice for the current set of technocrats. It’s not wide roads which destroy walkability; it’s scale. Manhattan in 1900, a city of 1.8 million people on a roughly 20-square-mile island (excluding the relatively unpopulated northern peninsula), simply was not walkable as a whole.
Now the fashion is to build “New Urban” places which claim walkability and all that, but they’re mostly cargo-cult copies of things which didn’t work the way the New Urbanists think they did, and even if they had, wouldn’t work that way now.
I don’t know, I think grids make navigation easier for pedestrians, too (but I’m a straight lines kind of guy). I really like the city of Sapporo with its grid structure emanating from a tower in the city’s center. Every street corner tells you which quadrant you are in and how far from the center you are. It’s far easier to get around there than Tokyo.
On the other hand, I agree with you in that I also dislike places that are designed only for car traffic. One thing I dislike about the suburbs of my hometown is that they seem like nothing but a bunch of isolated habitation and commerce pods connected by cars. They are not very pleasant or aesthetically pleasing and they have no individual character. If you actually do try walking outside, you end up looking very strange.
My general take on trying to replace evolved, organic systems with logic-based systems is that it’s kind of like replacing your current ability to walk with four keyboard keys, as Scott recently linked. In theory, you should be able to run faster by controlling the twitches of all your muscle fibers. In practice, you have to leave the vast majority of such activity to lower-level systems. Trying to replace all of them without, in effect, becoming a Tanzanian farmer yourself, is really hard.
Which is not to say we can’t design a robot that walks and runs with a more adaptive gait than most humans; it’s just way, way harder than it seems, and won’t be accomplished by voting for committee members.
Arguably the only way such feats of rational engineering get accomplished is through allowing the organic, non-coercive evolution of society, given that we don’t have any aliens smart enough to be our lab coat-wearing “Tanzanian human farmers.”
I’m interested also in the question of why the failed bureaucrats seem never to lose their jobs. Related is how politicians and pundits who make wildly inaccurate promises and predictions are rarely drummed out of public life in disgrace. If you argue repeatedly that the Iraq War will be a huge success and turn out to be super, super wrong, for example, you seem fairly unlikely to suffer professional setbacks. Utter something which might possibly be construed as racist at a time when you think no one is recording, on the other hand…
My best theory is that once you occupy a prominent position in the public eye, the public tends to remember the “prominent” part and forget the specifics of what you actually do and say (of course, it is also not unreasonable that people who are constantly publishing their opinions for years on end should get something of a pass for being terribly wrong now and again, assuming they have also contributed something good at some point).
Sort of relates to the issue of “who are we going to get to regulate the x industry?” Why, people with lots of experience in x industry, of course! But aren’t they the ones responsible for all the horrible things we’re trying to regulate away? Yeah, maybe, but who else is qualified?
Also thought of Antifragile when reading this.
Highly recommend reading it; it was one of the most interesting books I have read in years.
There’s a nontrivial similarity between ~100 years of real AI research and “centuries of tanzanian peasants doing x-risk research” in the sense that the people building robots and implementing AI algorithms intended for real tasks are a distinct community from the hardcore MIRI crowd (and similarly distinct from the tech industry businessmen who talk about AI-related x-risk in public). It’s not as absolute as Scott’s examples: a greater percentage of MIRI members know how to code & have non-trivial familiarity with AI than western agricultural scientists were n-th generation Tanzanian farmers — but most of the discussion around AI risk by philosophers is framed in terms that stopped making sense around the time that PROLOG lost popularity (like paperclip maximizers — a blind optimizer with huge functional range would never be implemented or put in charge of anything not because it’s going to turn the earth into a paperclip but because minor flaws in its simulation mean that its plans are pretty likely to fail horribly at scale & wouldn’t be implemented without serious human oversight), even when they aren’t quite as bad as the mostly-pop-culture-centered public discourse that focuses on ideas entirely disconnected from reality (Asimov’s laws, can-robots-have-emotions, is-robot-art-real-art, what-if-they-hate-us-and-nuke-us).
One *huge* disconnect is one that Charlie Stross famously outlines: why make a human-equivalent AI when it’s cheaper just to have sex and raise a kid? Cheaper in terms of resources [the human brain weighs a couple of pounds and runs on sugar], development time [about 20 years, so less than the time between the invention of backpropagation and the invention of the RNN], existing infrastructure [huge government funding exists for the programming of human brains (i.e. school), as well as subsidies for supporting humans that are not yet sufficiently debugged for production], and lifespan [ever had a computer continue to function for eighty years? ever even driven an eighty-year-old car?]. The only economic reason to attempt to develop AI workers of human-like intelligence is legibility, and it looks like that legibility will be an illusion, for the same reason that self-reporting of mental states is of dubious value: storing meta-information about trains of thought is expensive, while creating erroneous reconstructions when necessary is cheap.
Reading the survey results dashed my hopes that the login system would go away and I would be able to post free as a sparrow again. Alas, alas.
Water under the bridge now (or air past the wing?); I’ve caved.
I thought the obvious connection you could have made between metis and modernization in the world today is the “other crop”: human minds. There is still a large movement in America which opposes Modernization in Education, and even regrets things like compulsory schooling, which began in New York in the 1820s. The unschoolers, deschoolers, and homeschoolers all resist, implicitly or explicitly, the “Schooling Industrial Complex” in exactly the same way that peasants seemed to passively or actively resist the French centralized government: not always out of ideology, but sometimes out of the simple human desire to take care of one’s own affairs.
In fact, this is part of the reason Common Core has been defunded. What was originally a plan crafted with much input from all sectors of the Schooling Complex was destroyed by politicization, feelings of local loss of control, and heavy resistance to the damn new math books (all our love to the textbook industry, you guys are the greatest). It’s probably not a coincidence that the Red States, which also have a higher percentage of homeschool families, Fedhaters, religious schools, and “folk ideas” about education, were the primary destroyers of the program. But do not forget, Common Core was defunded on a bipartisan basis, and boatloads of “folk” wound up hating it, especially math teachers and teachers who felt they were losing control of their classrooms to the caprice and stress of standardized tests.
Schooling is also the area where SSC Scott has proposed openness to letting the kids homeschool/unschool until 6th grade and live off a UBI until then.
Although I’m too much of a “modernist” to homeschool myself, many of my friends, the people I work with, and the students I teach homeschool or were homeschooled. I know that I know a disproportionately large number of homeschoolers, but I wonder how many homeschoolers the average person knows or works with in the course of a week?
I think homeschooling in America is a solid example of people trying to use metis to resist central planning.
I’ve seen another reference to Haldol as “hound dog”:
A lot of these ideas go back to the 5th century BCE, with Hippodamus of Miletus.
I have a friend who claims to have been sent on a weird trip by loratadine. Yes, loratadine the mild, non-drowsy antihistamine. Could be an interaction with one of the many other medications she’s on. At first I didn’t believe her, but it’s not outside the realm of possibility.
As several other people have mentioned, this reminds me a lot of Antifragile (I actually haven’t read it, but I’ve read several articles by Taleb on the same subject).
This also reminds me a bit of The Secret of Our Success. I haven’t read that book either, but I’ve read this article that says that the book “argues that cultural learning, in particular by copying seemingly useless steps, is one of the great intellectual advantages of our species”: https://www.scotthyoung.com/blog/2016/02/02/blind-copying/
For example, Native Americans were able to avoid pellagra (niacin deficiency) from corn by soaking it in ash. Europeans ignored this seemingly unnecessary step (it was just the Native Americans’ “custom”) and suffered the consequences.
What is nixtamalization? … players of Jeopardy take note (Scrabble-lovers too). 🙂
Singapore came up in a conversation today, and I realized that it may be another counter-example to Scott’s arguments. I’ve never been to Singapore, but I understand it has achieved success in several areas through very heavy-handed, top-down planning.
On the other hand, the government seems a little too oppressive for my tastes.
Singapore seems to be explainable by Lee Kuan Yew being a political genius. It’s the usual argument that a genius given free rein can do better than democracy, but the downsides are striking.
Singapore is a conundrum and an outlier. You may have the answer, Tracy. Although it has to be more than genius; it has to be benevolent genius.
So, probably a bit late in the thread to be bringing this up, but this seems to be a fairly standard example of the tradeoffs between standardization and customization.
With standardization, you reduce technical debt for newcomers and corruption (read: bad customization) in the middle layers; but if you do it wrong you lose out on the good customization that let you do so well in the first place. With customization, you get closer to what you actually want in each circumstance, but if you do it wrong no one will be able to pick up where you left off (or clean up any of your messes).
This book is an interesting counterpoint to James Scott’s arguments about peasant life: https://www.amazon.com/Rational-Peasant-Political-Economy-Society/dp/0520039548
This reminds me of the thoughts I had when reading Solzhenitsyn’s “Matryona’s House”
https://en.wikipedia.org/wiki/Matryona's_Place
and Tolstoy’s “Master and Man”.
https://en.wikipedia.org/wiki/Master_and_Man_(short_story)
I don’t usually nit-pick about grammar or word choice on this blog, but I’d really appreciate it if you’d fix “stacking the dice” to either “stacking the deck” or “using loaded dice”. There’s no grace to “stacking the dice”.