Tyler Cowen writes about cost disease. I’d previously heard the term used to refer only to a specific theory of why costs are increasing, involving labor becoming more efficient in some areas than others. Cowen seems to use it indiscriminately to refer to increasing costs in general – which I guess is fine, goodness knows we need a word for that.
Cowen assumes his readers already understand that cost disease exists. I don’t know if this is true. My impression is that most people still don’t know about cost disease, or don’t realize the extent of it. So I thought I would make the case for the cost disease in the sectors Tyler mentions – health care and education – plus a couple more.
First let’s look at primary education:
There was some argument about the style of this graph, but as per Politifact the basic claim is true. Per student spending has increased about 2.5x in the past forty years even after adjusting for inflation.
At the same time, test scores have stayed relatively stagnant. You can see the full numbers here, but in short, high school students’ reading scores went from 285 in 1971 to 287 today – a difference of 0.7%.
There is some heterogeneity across races – white students’ test scores increased 1.4% and minority students’ scores by about 20%. But it is hard to credit school spending for the minority students’ improvement, which occurred almost entirely during the period from 1975 to 1985. School spending has been on exactly the same trajectory before and after that time, and in white and minority areas, suggesting that there was something specific about that decade which improved minority (but not white) scores. Most likely this was the general improvement in minorities’ conditions around that time, giving them better nutrition and a more stable family life. It’s hard to construct a narrative where it was school spending that did it – and even if it did, note that the majority of the increase in school spending happened from 1985 on, and demonstrably helped neither whites nor minorities.
I discuss this phenomenon more here and here, but the summary is: no, it’s not just because of special ed; no, it’s not just a factor of how you measure test scores; no, there’s not a “ceiling effect”. Costs really did more-or-less double without any concomitant increase in measurable quality.
So, imagine you’re a poor person. White, minority, whatever. Which would you prefer? Sending your child to a 2016 school? Or sending your child to a 1975 school, and getting a check for $5,000 every year?
I’m proposing that choice because as far as I can tell those are the stakes here. 2016 schools have whatever tiny test score advantage they have over 1975 schools, and cost $5000/year more, inflation adjusted. That $5000 comes out of the pocket of somebody – either taxpayers, or other people who could be helped by government programs.
Second, college is even worse:
Note this is not adjusted for inflation; see link below for adjusted figures
Inflation-adjusted cost of a university education was something like $2000/year in 1980. Now it’s closer to $20,000/year. No, it’s not because of decreased government funding, and there are similar trajectories for public and private schools.
I don’t know if there’s an equivalent of “test scores” measuring how well colleges perform, so just use your best judgment. Do you think that modern colleges provide $18,000/year greater value than colleges did in your parents’ day? Would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $72,000?
(or, more realistically, have $72,000 less in student loans to pay off)
Was your parents’ college even noticeably worse than yours? My parents sometimes talk about their college experience, and it seems to have had all the relevant features of a college experience. Clubs. Classes. Professors. Roommates. I might have gotten something extra for my $72,000, but it’s hard to see what it was.
Third, health care. The graph is starting to look disappointingly familiar:
The cost of health care has about quintupled since 1970. It’s actually been rising since earlier than that, but I can’t find a good graph; it looks like it would have been about $1200 in today’s dollars in 1960, for an increase of about 800% in those fifty years.
This has had the expected effects. The average 1960 worker spent ten days’ worth of their yearly paycheck on health insurance; the average modern worker spends sixty days’ worth of it, a sixth of their entire earnings.
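The days-of-pay framing above can be sketched in a few lines. The ten-day and sixty-day figures are the ones reported; the dollar amounts in the example call are purely hypothetical placeholders:

```python
# Sketch of the "days of pay" framing. The 10-day vs. 60-day figures are
# from the source; the dollar amounts below are hypothetical placeholders.
DAYS_PER_YEAR = 365

def days_of_pay(annual_premium, annual_income):
    """How many days of income an annual premium represents."""
    return annual_premium / (annual_income / DAYS_PER_YEAR)

# Sixty days of pay is about a sixth of a year's earnings:
print(round(60 / DAYS_PER_YEAR, 3))   # 0.164, i.e. roughly one sixth
print(days_of_pay(8000, 48000))       # a hypothetical $8k premium on $48k income, ≈ 61 days
```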
This time I can’t say with 100% certainty that all this extra spending has been for nothing. Life expectancy has gone way up since 1960:
Extra bonus conclusion: the Spanish flu was really bad
But a lot of people think that life expectancy depends on other things a lot more than healthcare spending. Sanitation, nutrition, quitting smoking, plus advances in health technology that don’t involve spending more money. ACE inhibitors (invented in 1975) are great and probably increased lifespan a lot, but they cost $20 for a year’s supply and replaced older drugs that cost about the same amount.
In terms of calculating how much lifespan gain healthcare spending has produced, we have a couple of options. Start by comparing countries:
Countries like South Korea and Israel have about the same life expectancy as the US but pay about 25% of what we do. Some people use this to prove the superiority of centralized government health systems, although Random Critical Analysis has an alternative perspective. In any case, it seems very possible to get the same improving life expectancies as the US without octupling health care spending.
The Netherlands increased their health budget by a lot around 2000, sparking a bunch of studies on whether that increased life expectancy or not. There’s a good meta-analysis here, which lists six studies trying to calculate how much of the change in life expectancy was due to the large increases in health spending during this period. There’s a broad range of estimates: 0.3%, 1.8%, 8.0%, 17.2%, 22.1%, 27.5% (I’m taking their numbers for men; the numbers for women are pretty similar). They also mention two studies that they did not officially include; one finding 0% effect and one finding 50% effect (I’m not sure why these studies weren’t included). They add:
In none of these studies is the issue of reverse causality addressed; sometimes it is not even mentioned. This implies that the effect of health care spending on mortality may be overestimated.
Based on our review of empirical studies, we conclude that it is likely that increased health care spending has contributed to the recent increase in life expectancy in the Netherlands. Applying the estimates from published studies to the observed increase in health care spending in the Netherlands between 2000 and 2010 [of 40%] would imply that 0.3% to almost 50% of the increase in life expectancy may have been caused by increasing health care spending. An important reason for the wide range in such estimates is that they all include methodological problems highlighted in this paper. However, this wide range indicates that the counterfactual study by Meerding et al, which argued that 50% of the increase in life expectancy in the Netherlands since the 1950s can be attributed to medical care, can probably be interpreted as an upper bound.
It’s going to be completely irresponsible to try to apply this to the increase in health spending in the US over the past 50 years, since this is probably different at every margin and the US is not the Netherlands and the 1950s are not the 2010s. But if we irresponsibly take their median estimate and apply it to the current question, we get that increasing health spending in the US has been worth about one extra year of life expectancy.
This study attempts to directly estimate a conversion from health spending (as a percentage of GDP) to life expectancy, and says that an increase of 1% of GDP corresponds to an increase of 0.05 years of life expectancy. That would suggest a slightly different number of 0.65 years of life expectancy gained by healthcare spending since 1960.
If these numbers seem absurdly low, remember all of those controlled experiments where giving people insurance doesn’t seem to make them much healthier in any meaningful way.
Or instead of slogging through the statistics, we can just ask the same question as before. Do you think the average poor or middle-class person would rather:
a) Get modern health care
b) Get the same amount of health care as their parents’ generation, but with modern technology like ACE inhibitors, and also earn $8000 extra a year
Fourth, we see similar effects in infrastructure. The first New York City subway opened around 1900. Various sources list lengths from 10 to 20 miles and costs from $30 million to $60 million – I think my sources are capturing it at different stages of construction with different numbers of extensions. In any case, it suggests costs of between $1.5 million and $6 million per mile, or $1–4 million per kilometer. That looks like it’s about the inflation-adjusted equivalent of $100 million/kilometer today, though I’m very uncertain about that estimate. In contrast, Vox notes that a new New York subway line being opened this year costs about $2.2 billion per kilometer, suggesting a cost increase of about twenty times.
Things become clearer when you compare them country-by-country. The same Vox article notes that Paris, Berlin, and Copenhagen subways cost about $250 million per kilometer, almost 90% less. Yet even those European subways are overpriced compared to Korea, where a kilometer of subway in Seoul costs $40 million/km (another Korean subway project cost $80 million/km). This is a difference of 50x between Seoul and New York for apparently comparable services. It suggests that the 1900s New York estimate above may have been roughly accurate if their efficiency was roughly in line with that of modern Europe and Korea.
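The subway arithmetic above can be made explicit. This sketch assumes a loose 30x inflation multiplier from 1900 to today (my assumption – hence the uncertainty flagged above); the modern per-kilometer costs are the Vox figures quoted above:

```python
# Rough reconstruction of the subway comparison. The ~30x inflation factor
# from 1900 to today is a loose assumption; modern per-km costs are the
# Vox figures ($2.2B/km for New York, $40M/km for Seoul).
CPI_MULTIPLIER_1900 = 30
KM_PER_MILE = 1.609

def per_km_today(total_dollars_1900, miles):
    """Inflation-adjusted cost per kilometer for a 1900-era project."""
    return total_dollars_1900 * CPI_MULTIPLIER_1900 / (miles * KM_PER_MILE)

low = per_km_today(30e6, 20)    # cheapest reading of the sources, ≈ $28M/km
high = per_km_today(60e6, 10)   # most expensive reading, ≈ $112M/km

modern_nyc, seoul = 2.2e9, 40e6
print(modern_nyc / high)        # ≈ 20x increase over the 1900 line
print(modern_nyc / seoul)       # ≈ 55x vs. Seoul
```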
Fifth, housing (source):
Most of the important commentary on this graph has already been said, but I would add that optimistic takes like this one by the American Enterprise Institute are missing some of the dynamic. Yes, homes are bigger than they used to be, but part of that is zoning laws which make it easier to get big houses than small houses. There are a lot of people who would prefer to have a smaller house but don’t. When I first moved to Michigan, I lived alone in a three bedroom house because there were no good one-bedroom houses available near my workplace and all of the apartments were loud and crime-y.
Or, once again, just ask yourself: do you think most poor and middle class people would rather:
1. Rent a modern house/apartment
2. Rent the sort of house/apartment their parents had, for half the cost
So, to summarize: in the past fifty years, education costs have doubled, college costs have dectupled, health insurance costs have dectupled, subway costs have at least dectupled, and housing costs have increased by about fifty percent. US health care costs about four times as much as equivalent health care in other First World countries; US subways cost about eight times as much as equivalent subways in other First World countries.
I worry that people don’t appreciate how weird this is. I didn’t appreciate it for a long time. I guess I just figured that Grandpa used to talk about how back in his day movie tickets only cost a nickel; that was just the way of the world. But all of the numbers above are inflation-adjusted. These things have dectupled in cost even after you adjust for movies costing a nickel in Grandpa’s day. They have really, genuinely dectupled in cost, no economic trickery involved.
And this is especially strange because we expect that improving technology and globalization ought to cut costs. In 1983, the first mobile phone cost $4,000 – about $10,000 in today’s dollars. It was also a gigantic piece of crap. Today you can get a much better phone for $100. This is the right and proper way of the universe. It’s why we fund scientists, and pay businesspeople the big bucks.
But things like college and health care have still had their prices dectuple. Patients can now schedule their appointments online; doctors can send prescriptions through the fax, pharmacies can keep track of medication histories on centralized computer systems that interface with the cloud, nurses get automatic reminders when they’re giving two drugs with a potential interaction, insurance companies accept payment through credit cards – and all of this costs ten times as much as it did in the days of punch cards and secretaries who did calculations by hand.
It’s actually even worse than this, because we take so many opportunities to save money that were unavailable in past generations. Underpaid foreign nurses immigrate to America and work for a song. Doctors’ notes are sent to India overnight where they’re transcribed by sweatshop-style labor for pennies an hour. Medical equipment gets manufactured in goodness-only-knows which obscure Third World country. And it still costs ten times as much as when this was all made in the USA – and that was back when minimum wages were proportionally higher than they are today.
And it’s actually even worse than this. A lot of these services have decreased in quality, presumably as an attempt to cut costs even further. Doctors used to make house calls; even when I was young in the ’80s my father would still go to the houses of difficult patients who were too sick to come to his office. This study notes that for women who give birth in the hospital, “the standard length of stay was 8 to 14 days in the 1950s but declined to less than 2 days in the mid-1990s”. The doctors I talk to say this isn’t because modern women are healthier, it’s because they kick them out as soon as it’s safe to free up beds for the next person. Historic records of hospital care generally describe leisurely convalescence periods and making sure somebody felt absolutely well before letting them go; this seems bizarre to anyone who has participated in a modern hospital, where the mantra is to kick people out as soon as they’re “stable” ie not in acute crisis.
If we had to provide the same quality of service as we did in 1960, and without the gains from modern technology and globalization, who even knows how many times more health care would cost? Fifty times more? A hundred times more?
And the same is true for colleges and houses and subways and so on.
The existing literature on cost disease focuses on the Baumol effect. Suppose in some underdeveloped economy, people can choose either to work in a factory or join an orchestra, and the salaries of factory workers and orchestra musicians reflect relative supply and demand and profit in those industries. Then the economy undergoes a technological revolution, and factories can produce ten times as many goods. Some of the increased productivity trickles down to factory workers, and they earn more money. Would-be musicians leave the orchestras behind to go work in the higher-paying factories, and the orchestras have to raise their prices if they want to be assured enough musicians. So tech improvements in the factory sector raise prices in the orchestra sector.
We could tell a story like this to explain rising costs in education, health care, etc. If technology increases productivity for skilled laborers in other industries, then less susceptible industries might end up footing the bill since they have to pay their workers more.
There’s only one problem: health care and education aren’t paying their workers more; in fact, quite the opposite.
Here are teacher salaries over time (source):
Teacher salaries are relatively flat adjusting for inflation. But salaries for other jobs are increasing modestly relative to inflation. So teacher salaries relative to other occupations’ salaries are actually declining.
Here’s a similar graph for professors (source):
Professor salaries are going up a little, but again, they’re probably losing position relative to the average occupation. Also, note that although the average salary of each type of faculty is stable or increasing, the average salary of all faculty is going down. No mystery here – colleges are doing everything they can to switch from tenured professors to adjuncts, who complain of being overworked and abused while making about the same amount as a Starbucks barista.
This seems to me a lot like the case of the hospitals cutting care for new mothers. The price of the service dectuples, yet at the same time the service has to sacrifice quality in order to control costs.
And speaking of hospitals, here’s the graph for nurses (source):
Female nurses’ salaries went from about $55,000 in 1988 to $63,000 in 2013. This is probably around the average wage increase during that time. Also, some of this reflects changes in education: in the 1980s only 40% of nurses had a degree; by 2010, about 80% did.
And for doctors (source):
Stable again! Except that a lot of doctors’ salaries now go to paying off their medical school debt, which has been ballooning like everything else.
I don’t have a similar graph for subway workers, but come on. The overall picture is that health care and education costs have managed to increase by ten times without a single cent of the gains going to teachers, doctors, or nurses. Indeed, these professions seem to have lost ground salary-wise relative to others.
I also want to add some anecdote to these hard facts. My father is a doctor and my mother is a teacher, so I got to hear a lot about how these professions have changed over the past generation. It seems at least a little like the adjunct story, although without the clearly defined “professor vs. adjunct” dichotomy that makes it so easy to talk about. Doctors are really, really, really unhappy. When I went to medical school, some of my professors would tell me outright that they couldn’t believe anyone would still go into medicine with all of the new stresses and demands placed on doctors. This doesn’t seem to be limited to one medical school. Wall Street Journal: Why Doctors Are Sick Of Their Profession – “American physicians are increasingly unhappy with their once-vaunted profession, and that malaise is bad for their patients”. The Daily Beast: How Being A Doctor Became The Most Miserable Profession – “Being a doctor has become a miserable and humiliating undertaking. Indeed, many doctors feel that America has declared war on physicians”. Forbes: Why Are Doctors So Unhappy? – “Doctors have become like everyone else: insecure, discontent and scared about the future.” Vox: Only Six Percent Of Doctors Are Happy With Their Jobs. Al Jazeera America: Here’s Why Nine Out Of Ten Doctors Wouldn’t Recommend Medicine As A Profession. Read these articles and they all say the same thing that all the doctors I know say – medicine used to be a well-respected, enjoyable profession where you could give patients good care and feel self-actualized. Now it kind of sucks.
Meanwhile, I also see articles like this piece from NPR saying teachers are experiencing historic stress levels and up to 50% say their job “isn’t worth it”. Teacher job satisfaction is at historic lows. And the veteran teachers I know say the same thing as the veteran doctors I know – their jobs used to be enjoyable and make them feel like they were making a difference; now they feel overworked, unappreciated, and trapped in mountains of paperwork.
It might make sense for these fields to become more expensive if their employees’ salaries were increasing. And it might make sense for salaries to stay the same if employees instead benefited from lower workloads and better working conditions. But neither of these is happening.
So what’s going on? Why are costs increasing so dramatically? Some possible answers:
First, can we dismiss all of this as an illusion? Maybe adjusting for inflation is harder than I think. Inflation is an average, so some things have to have higher-than-average inflation; maybe it’s education, health care, etc. Or maybe my sources have the wrong statistics.
But I don’t think this is true. The last time I talked about this problem, someone mentioned they’re running a private school which does just as well as public schools but costs only $3000/student/year, a fourth of the usual rate. Marginal Revolution notes that India has a private health system that delivers the same quality of care as its public system for a quarter of the cost. Whenever the same drug is provided by the official US health system and some kind of grey market supplement sort of thing, the grey market supplement costs between a fifth and a tenth as much; for example, Google’s first hit for Deplin®, official prescription L-methylfolate, costs $175 for a month’s supply; an unregulated L-methylfolate supplement delivers the same dose for about $30. And this isn’t even mentioning things like the $1 bag of saline that costs $700 at hospitals. Since it seems like it’s not too hard to do things for a fraction of what we currently do things for, probably we should be less reluctant to believe that the cost of everything is really inflated.
Second, might markets just not work? I know this is kind of an extreme question to ask in a post on economics, but maybe nobody knows what they’re doing in a lot of these fields and people can just increase costs and not suffer any decreased demand because of it. Suppose that people proved beyond a shadow of a doubt that Khan Academy could teach you just as much as a normal college education, but for free. People would still ask questions like – will employers accept my Khan Academy degree? Will it look good on a resume? Will people make fun of me for it? The same is true of community colleges, second-tier colleges, for-profit colleges, et cetera. I got offered a free scholarship to a mediocre state college, and I turned it down on the grounds that I knew nothing about anything and maybe years from now I would be locked out of some sort of Exciting Opportunity because my college wasn’t prestigious enough. Assuming everyone thinks like this, can colleges just charge whatever they want?
Likewise, my workplace offered me three different health insurance plans, and I chose the middle-expensiveness one, on the grounds that I had no idea how health insurance worked but maybe if I bought the cheap one I’d get sick and regret my choice, and maybe if I bought the expensive one I wouldn’t be sick and regret my choice. I am a doctor, my employer is a hospital, and the health insurance was for treatment in my own health system. The moral of the story is that I am an idiot. The second moral of the story is that people probably are not super-informed health care consumers.
This can’t be pure price-gouging, since corporate profits haven’t increased nearly enough to be where all the money is going. But a while ago a commenter linked me to the Delta Cost Project, which scrutinizes the exact causes of increasing college tuition. Some of it is the administrative bloat that you would expect. But a lot of it is fun “student life” types of activities like clubs, festivals, and paying Milo Yiannopoulos to speak and then cleaning up after the ensuing riots. These sorts of things improve the student experience, but I’m not sure that the average student would rather go to an expensive college with clubs/festivals/Milo than a cheap college without them. More important, it doesn’t really seem like the average student is offered this choice.
This kind of suggests a picture where colleges expect people will pay whatever price they set, so they set a very high price and then use the money for cool things and increasing their own prestige. Or maybe clubs/festivals/Milo become such a signal of prestige that students avoid colleges that don’t comply since they worry their degrees won’t be respected? Some people have pointed out that hospitals have switched from many-people-all-in-a-big-ward to private rooms. Once again, nobody seems to have been offered the choice between expensive hospitals with private rooms versus cheap hospitals with roommates. It’s almost as if industries have their own reasons for switching to more-bells-and-whistles services that people don’t necessarily want, and consumers just go along with it because for some reason they’re not exercising choice the same as they would in other markets.
(this article on the Oklahoma City Surgery Center might be about a partial corrective for this kind of thing)
Third, can we attribute this to the inefficiency of government relative to private industry? I don’t think so. The government handles most primary education and subways, and has its hand in health care. But we know that for-profit hospitals aren’t much cheaper than government hospitals, and that private schools usually aren’t much cheaper (and are sometimes more expensive) than government schools. And private colleges cost more than government-funded ones.
Fourth, can we attribute it to indirect government intervention through regulation, which public and private companies alike must deal with? This seems to be at least part of the story in health care, given how much money you can save by grey-market practices that avoid the FDA. It’s harder to apply it to colleges, though some people have pointed out regulations like Title IX that affect the educational sector.
One factor that seems to speak out against this is that starting with Reagan in 1980, and picking up steam with Gingrich in 1994, we got an increasing presence of Republicans in government who declared war on overregulation – but the cost disease proceeded unabated. This is suspicious, but in fairness to the Republicans, they did sort of fail miserably at deregulating things. “The literal number of pages in the regulatory code” is kind of a blunt instrument, but it doesn’t exactly inspire confidence in the Republicans’ deregulation efforts:
Here’s a more interesting (and more fun) argument against regulations being to blame: what about pet health care? Veterinary care is much less regulated than human health care, yet its cost is rising as fast as (or faster than) that of the human medical system (popular article, study). I’m not sure what to make of this.
Fifth, might the increased regulatory complexity happen not through literal regulations, but through fear of lawsuits? That is, might institutions add extra layers of administration and expense not because they’re forced to, but because they fear being sued if they don’t and then something goes wrong?
I see this all the time in medicine. A patient goes to the hospital with a heart attack. While he’s recovering, he tells his doctor that he’s really upset about all of this. Any normal person would say “You had a heart attack, of course you’re upset, get over it.” But if his doctor says this, and then a year later he commits suicide for some unrelated reason, his family can sue the doctor for “not picking up the warning signs” and win several million dollars. So now the doctor consults a psychiatrist, who does an hour-long evaluation, charges the insurance company $500, and determines using her immense clinical expertise that the patient is upset because he just had a heart attack.
Those outside the field have no idea how much of medicine is built on this principle. People often say that the importance of lawsuits to medical cost increases is overrated because malpractice insurance doesn’t cost that much, but the situation above would never look lawsuit-related; the whole thing only works because everyone involved documents it as a well-justified psychiatric consult to investigate depression. Apparently some studies suggest this isn’t happening, but all they do is survey doctors, and with all due respect all the doctors I know say the opposite.
This has nothing to do with government regulations (except insofar as these make lawsuits easier or harder), but it sure can drive cost increases, and it might apply to fields outside medicine as well.
Sixth, might we have changed our level of risk tolerance? That is, might increased caution be due not purely to lawsuitphobia, but to really caring more about whether or not people are protected? I read stuff every so often about how playgrounds are becoming obsolete because nobody wants to let kids run around unsupervised on something with sharp edges. Suppose that one in 10,000 kids get a horrible playground-related injury. Is it worth making playgrounds cost twice as much and be half as fun in order to decrease that number to one in 100,000? This isn’t a rhetorical question; I think different people can have legitimately different opinions here (though there are probably some utilitarian things we can do to improve them).
To bring back the lawsuit point, some of this probably relates to a difference between personal versus institutional risk tolerance. Every so often, an elderly person getting up to walk to the bathroom will fall and break their hip. This is a fact of life, and elderly people deal with it every day. Most elderly people I know don’t spend thousands of dollars fall-proofing the route from their bed to their bathroom, or hiring people to watch them at every moment to make sure they don’t fall, or buying a bedside commode to make bathroom-related falls impossible. This suggests a revealed preference that elderly people are willing to tolerate a certain fall probability in order to save money and convenience. Hospitals, which face huge lawsuits if any elderly person falls on the premises, are not willing to tolerate that probability. They put rails on elderly people’s beds, place alarms on them that will go off if the elderly person tries to leave the bed without permission, and hire patient care assistants who among other things go around carefully holding elderly people upright as they walk to the bathroom (I assume this job will soon require at least a master’s degree). As more things become institutionalized and the level of acceptable institutional risk tolerance becomes lower, this could shift the cost-risk tradeoff even if there isn’t a population-level trend towards more risk-aversion.
Seventh, might things cost more for the people who pay because so many people don’t pay? This is somewhat true of colleges, where an increasing number of people are getting in on scholarships funded by the tuition of non-scholarship students. I haven’t been able to find great statistics on this, but one argument against: couldn’t a college just not fund scholarships, and offer much lower prices to its paying students? I get that scholarships are good and altruistic, but it would be surprising if every single college thought of its role as an altruistic institution, and cared about it more than they cared about providing the same service at a better price. I guess this is related to my confusion about why more people don’t open up colleges. Maybe this is the “smart people are rightly too scared and confused to go to for-profit colleges, and there’s not enough ability to discriminate between the good and the bad ones to make it worthwhile to found a good one” thing again.
This also applies in health care. Our hospital (and every other hospital in the country) has some “frequent flier” patients who overdose on meth at least once a week. They come in, get treated for their meth overdose (we can’t legally turn away emergency cases), get advised to get help for their meth addiction (without the slightest expectation that they will take our advice) and then get discharged. Most of them are poor and have no insurance, but each admission costs a couple of thousand dollars. The cost gets paid by a combination of taxpayers and other hospital patients with good insurance who get big markups on their own bills.
Eighth, might total compensation be increasing even though wages aren’t? There definitely seems to be a pensions crisis, especially in a lot of government work, and it’s possible that some of this is going to pay the pensions of teachers, etc. My understanding is that in general pensions aren’t really increasing much faster than wages, but this might not be true in those specific industries. Also, this might pass the buck to the question of why we need to spend more on pensions now than in the past. I don’t think increasing life expectancy explains all of this, but I might be wrong.
I mentioned politics briefly above, but they probably deserve more space here. Libertarian-minded people keep talking about how there’s too much red tape and the economy is being throttled. And less libertarian-minded people keep interpreting it as not caring about the poor, or not understanding that government has an important role in a civilized society, or as a “dog whistle” for racism, or whatever. I don’t know why more people don’t just come out and say “LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY, AND WE’RE MOSTLY JUST DESPERATELY FLAILING AROUND LOOKING FOR SOLUTIONS HERE.” State that clearly, and a lot of political debates take on a different light.
For example: some people promote free universal college education, remembering a time when it was easy for middle class people to afford college if they wanted it. Other people oppose the policy, remembering a time when people didn’t depend on government handouts. Both are true! My uncle paid for his tuition at a really good college just by working a pretty easy summer job – not so hard when college cost a tenth of what it does now. The modern conflict between opponents and proponents of free college education is over how to distribute our losses. In the old days, we could combine low taxes with widely available education. Now we can’t, and we have to argue about which value to sacrifice.
Or: some people get upset about teachers’ unions, saying they must be sucking the “dynamism” out of education because of increasing costs. Other people fiercely defend them, saying teachers are underpaid and overworked. Once again, in the context of cost disease, both are obviously true. The taxpayers are just trying to protect their right to get education as cheaply as they used to. The teachers are trying to protect their right to make as much money as they used to. The conflict between the taxpayers and the teachers’ unions is about how to distribute losses; somebody is going to have to be worse off than they were a generation ago, so who should it be?
And the same is true to greater or lesser degrees in the various debates over health care, public housing, et cetera.
Imagine if tomorrow, the price of water dectupled. Suddenly people have to choose between drinking and washing dishes. Activists argue that taking a shower is a basic human right, and grumpy talk show hosts point out that in their day, parents taught their children not to waste water. A coalition promotes laws ensuring government-subsidized free water for poor families; a Fox News investigative report shows that some people receiving water on the government dime are taking long luxurious showers. Everyone gets really angry and there’s lots of talk about basic compassion and personal responsibility and whatever, but all of this is secondary to the question of why water costs ten times what it used to.
I think this is the basic intuition behind why so many people, even those who genuinely want to help the poor, are afraid of “tax and spend” policies. In the context of cost disease, these look like industries constantly doubling, tripling, or dectupling their price, and the government saying “Okay, fine,” and increasing taxes however much it costs to pay for whatever they’re demanding now.
If we give everyone free college education, that solves a big social problem. It also locks in a price which is ten times too high for no reason. This isn’t fair to the government, which has to pay ten times more than it should. It’s not fair to the poor people, who have to face the stigma of accepting handouts for something they could easily have afforded themselves if it was at its proper price. And it’s not fair to future generations if colleges take this opportunity to increase the cost by twenty times, and then our children have to subsidize that.
I’m not sure how many people currently opposed to paying for free health care, or free college, or whatever, would be happy to pay for health care that cost less, that was less wasteful and more efficient, and whose price we expected to go down rather than up with every passing year. I expect it would be a lot.
And if it isn’t, who cares? The people who want to help the poor have enough political capital to spend eg $500 billion on Medicaid; if that were to go ten times further, then everyone could get the health care they need without any more political action needed. If some government program found a way to give poor people good health insurance for a few hundred dollars a year, college tuition for about a thousand, and housing for only two-thirds what it costs now, that would be the greatest anti-poverty advance in history. That program is called “having things be as efficient as they were a few decades ago”.
In 1930, economist John Maynard Keynes predicted that his grandchildren’s generation would have a 15 hour work week. At the time, it made sense. GDP was rising so quickly that anyone who could draw a line on a graph could tell that our generation would be four or five times richer than his. And the average middle-class person in his generation felt like they were doing pretty well and had most of what they needed. Why wouldn’t they decide to take some time off and settle for a lifestyle merely twice as luxurious as Keynes’ own?
Keynes was sort of right. GDP per capita is 4-5x greater today than in his time. Yet we still work forty hour weeks, and some large-but-inconsistently-reported percent of Americans (76? 55? 47?) still live paycheck to paycheck.
And yes, part of this is because inequality is increasing and most of the gains are going to the rich. But this alone wouldn’t be a disaster; we’d get to Keynes’ utopia a little slower than we might otherwise, but eventually we’d get there. Most gains going to the rich means at least some gains are going to the poor. And at least there’s a lot of mainstream awareness of the problem.
I’m more worried about the part where the cost of basic human needs goes up faster than wages do. Even if you’re making twice as much money, if your health care and education and so on cost ten times as much, you’re going to start falling behind. Right now the standard of living isn’t just stagnant, it’s at risk of declining, and a lot of that is student loans and health insurance costs and so on.
What’s happening? I don’t know and I find it really scary.
On teachers’ salaries, at least, the NCES data covers WAGES only, not total compensation. Given their civil service protections, automatic seniority-based promotions, and extremely generous benefits and pensions, a picture of flatlining wages is inaccurate. I’d also look at the sheer NUMBER of teachers employed over time, as I guarantee you that, nationwide, student/faculty ratios (to say nothing of student to administrator ratios) were substantially higher 40 years ago.
On regulations, you have to look earlier than ’75. The neo-liberal era (roughly 1980 till 2000, or maybe 2008) did slow things down a little, but that wave is clearly spent.
Probably exaggerated. In liberal-as-hell Massachusetts, teachers don’t get “professional status” until their fourth year in a district and work year-to-year contracts before that. It is easy to make teachers’ lives miserable by giving them classes full of awful students, piling on paperwork, etc. until they quit. And NCLB and similar federal programs have given administrations a palette of tools they can use to pile black marks onto teachers’ performance records to justify firing even long-standing teachers with tenure.
“Promotions”? In most schools, the only “promotion” you get from a “teacher” position is “department head”. This isn’t automatic, and it isn’t really a promotion. You get a small stipend, teach one less class, and do a hell of a lot more paperwork.
You mean the guaranteed pay increases? As I mentioned before, they’re only guaranteed starting the fourth year, since the school can decline to renew a contract any time before that. And the increases aren’t exactly mind-blowing either. Here’s a fairly typical Massachusetts school where an 11-year veteran with a Ph.D. makes $74,000, less than twice as much as a first-year teacher with a bachelor’s degree making $47,000 (see Appendix A). (One major difference being, the 11-year veteran probably has a mortgage and children.) And you lose all that seniority as soon as you change districts, which is not exactly an uncommon occurrence.
Those salaries are above the national median, but this is metrowest MA where cost of living is fairly high.
Can you be more specific about what’s so generous about the benefits? I don’t think teachers get especially comprehensive insurance plans, even here in Massachusetts where the unions are pretty damned strong. I think they just get the standard bottom-of-the-line Romneycare plan. (And it’s part of their compensation like any other employer-provided health insurance.)
Nowadays, teachers get 403(b)s, which are just mismanaged 401(k)s.
Most of the drop in student/teacher ratio happened between 1965 and 1985. The ratio has changed very little since 2005 as far as I can tell, while costs have if anything accelerated since then.
Also, in 1965, teachers could hit or humiliate kids who got out of line. Now, teachers aren’t even allowed to give “time out”s. The lack of disciplinary tools available to teachers now decreases the size of the largest manageable class. (I also think culture is relevant to this, where kids used to have more respect for adults in general and teachers in particular.)
>Probably exaggerated. In liberal-as-hell Massachusetts, teachers don’t get “professional status” until their fourth year in a district and work year-to-year contracts before that. It is easy to make teachers’ lives miserable by giving them classes full of awful students, piling on paperwork, etc. until they quit.
Do you know when most people get “impossible to fire status”? Never. And that’s a good thing.
>You mean the guaranteed pay increases? As I mentioned before, they’re only guaranteed starting the fourth year since the school can decline to renew a contract any time before that. And the increases aren’t exactly mind-blowing either. Here’s a fairly typical Massachusetts school where an 11-year veteran with a Ph.D. makes $74,000, less than twice as much as the first-year bachelor’s degree with $47,000 (see Appendix A).
And if that 11 year veteran slacks off and does a lousy job, how does it affect his pay? Not at all
>Most of the drop in student/teacher ratio happened between 1965 and 1985. The ratio has changed very little since 2005 as far as I can tell, while costs have if anything accelerated since then.
not according to the data presented by massivefocusedinaction
>Also, in 1965, teachers could hit or humiliate kids who got out of line. Now, teachers aren’t even allowed to give “time out”s. The lack of disciplinary tools available to teachers now decreases the size of the largest manageable class. (I also think culture is relevant to this, where kids used to have more respect for adults in general and teachers in particular.)
sure, but that’s not the point of the discussion.
Right. And most people in this case includes public school teachers.
Maybe you misunderstood. “Professional status” just means “you have a stable contract, but we can fire you if you give us cause.” Without professional status, the school can decline to renew the teacher’s contract with no penalties whatsoever to the school. Teacher is SOL if they were depending on having the job and their contract isn’t renewed.
Uh, it will impact it 100% because they will get fired. As I mentioned before, accountability measures like NCLB give administrators a lot of leverage.
I know at least one fantastic teacher who had professional status and a record of year-by-year increasing the percentage of passes on the AP chemistry exam in her class in an otherwise poor-performing school who got fired because she didn’t comply with a bunch of the bullshit busywork ed reform stuff the administration tried to push on her. So not only is it possible to fire a slacker with tenure. It’s possible to fire a demonstrably extremely effective teacher with tenure!
I know, I know, “anecdotal”, but it’s better than any evidence you’ve provided so far.
That’s not a graph of student/teacher ratio. (In fact, separating growth in faculty from growth in student body seems to be used to make the changes harder to compare, whereas just giving the ratio would make the comparison much easier — so this seems a little intentionally misleading.) Here’s one:
It’s hard to get good data for the last few years, so I’ll drop the “been flat since 2005” claim in favor of “the drop in student/teacher ratio is linear but the cost increases are superlinear”.
If cultural and behavioral changes are the cause of part of the cost increase, then I think that’s a pretty interesting result, even if it doesn’t give you any justification for maligning school teachers.
I take it you’re walking back the “extremely generous benefits and pensions” part?
Do you know when most people get “stable contract status”? Never.
Not sure why you are playing hide the ball and talking about one school district in one state when we are talking about the national public school system.
I think you misunderstand. “Professional status” for a teacher is the same as “having a W-2” for non-teachers. It’s like a normal salary job where if they fire you without cause you can collect unemployment insurance.
Not having professional status is like having a temp job that lasts one year and your employer has the choice to renew it or not. If they don’t renew it, you don’t get unemployment insurance. You don’t have any recourse. You have no grounds for a wrongful termination suit.
I’m much more secure in my private sector tech job than my wife is in her teaching job (I’d expect my bosses to give me at least one warning before firing me), I make about twice as much in salary, and I have better benefits.
I don’t work as hard, I’m not appreciably smarter than her, and I probably don’t add nearly as much value to society.
The assumption that I’m playing “hide the ball” seems pretty uncharitable. I’m talking about schools in MA because that’s what I know. I’m also talking about schools in MA because MA has high cost of living, unparalleled public school performance, and disproportionately strong teachers’ unions compared to the rest of the country, so if the accusations of lazy overpaid coddled teachers don’t apply to MA, then it’s hard to see how they’d apply anywhere.
Actually, people in most countries get “stable contract status” as soon as they start a full-time professional job. It’s called, you know, a contract.
>Maybe you misunderstood. “Professional status” just means “you have a stable contract, but we can fire you if you give us cause.”
Mass teachers get tenure after 4 years, not just professional status. Tenure is repeatedly described as “permanent” in these laws and comes with the full suite of civil service protections, which in practice, amounts to practically un-fireable.
>Without professional status, the school can decline to renew the teacher’s contract with no penalties whatsoever to the school. Teacher is SOL if they were depending on having the job and their contract isn’t renewed.
You mean just like every employee in the private sector who can be fired at will?
1. Tenure and professional status are synonyms.
2. Please provide evidence for the “un-fireable” claim. I know for a fact that it’s possible to fire tenured teachers, because I’ve seen it happen even to teachers who were demonstrably very good at their jobs (as I already mentioned).
No. As I already explained, I am a private sector employee and if I got fired without cause or laid off, I would be able to collect unemployment insurance. My wife does not have professional status, so the school can just decline to renew her contract. She gets no unemployment insurance, no recourse to union intervention, no recourse to wrongful termination suits.
Please read more carefully.
Edit: I haven’t read it yet, but you may find it interesting:
That quote is from probably the most powerful teacher’s union in the country.
>2. Please provide evidence for the “un-fireable” claim. I know for a fact that it’s possible to fire tenured teachers with cause, because I’ve seen it happen even to teachers who were demonstrably very good at their jobs (as I already mentioned).
According to the latest NCES data the average district in Mass has 209 teachers, an average of 2.2 of whom are fired every year for cause. Of those 2.2, 1.9 are nontenured. They don’t have a breakdown of the average tenured/non-tenured ratio, but assuming the national average of 55% tenured teachers, that means 1/5 of one percent of tenured teachers are fired in a given year, vastly below private sector rates, and slightly above the national rate for teachers.
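The rate quoted above can be checked with a few lines of arithmetic. This is only a sketch of the commenter's own calculation: the 209, 2.2, and 1.9 figures are the NCES numbers they cite, and the 55% tenured share is, as they say, an assumed national average rather than Massachusetts data. Run exactly, it comes out at roughly a quarter of one percent, which the comment rounds to "1/5 of one percent":

```python
# Back-of-the-envelope check of the tenured-teacher firing rate.
# All inputs are the commenter's figures; the tenured share is their assumption.
teachers_per_district = 209   # average MA district size (per the comment's NCES figure)
fired_per_year = 2.2          # average firings for cause, per district per year
fired_nontenured = 1.9        # of those, how many are non-tenured
tenured_share = 0.55          # assumed national average share of tenured teachers

tenured = teachers_per_district * tenured_share   # ~115 tenured teachers
fired_tenured = fired_per_year - fired_nontenured # ~0.3 tenured firings per year
rate = fired_tenured / tenured
print(f"{rate:.2%}")          # prints 0.26%
```

Note that the result is sensitive to the assumed tenured share: at a 45% share the rate rises to about 0.32%, so the "1/5 of one percent" figure should be read as an order-of-magnitude estimate, not a precise statistic.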
>She gets no unemployment insurance, no recourse to union intervention, no recourse to wrongful termination suits.
Again, except for UI, I don’t get any of those things either. Now the UI thing should be fixed, but it’s not the end of the universe and doesn’t outweigh the many perks she gets when she does get tenured.
>“They continue to be evaluated thoroughly. If they are struggling, they must be given guidance on how to improve. If they fail to improve, there is a speedy process of dismissing them,” wrote the MTA.
That they say it doesn’t make it true. The numbers don’t lie. “Due process rights” are precisely the civil service protections I’m talking about. To fire anyone, an employer has to prove to an arbitrator that they are firing them for cause. That is a very large burden; the process can take months or years and doesn’t always work. A year ago, the DC metro caught fire and it was found that the maintainer signing off on that section was faking his records. He appealed his dismissal, arguing that while it was true he faked his records, so did everyone else, so it wasn’t fair to fire him. He got his job back. The union then sued again for not giving it back to him fast enough.
These protections are orders of magnitudes more than exist in any private sector workplace, and they are poisonous for efficiency.
Should we be comparing to the private sector average? Isn’t that driven largely by the industries with extremely high turnover (entertainment, hospitality, food service)? Teachers are certainly fired at a low rate, but it’s not clear that the private sector average is driven by poor performance or that high turnover rates actually improve performance in any industry. (Granted, that’s moving the goalposts a bit.)
“I don’t get any of that, except the relevant one!”
Unemployment insurance and tenure are fulfilling the same function, here. People require more stability in their lives than most employers are willing to offer. The solution in most private-sector settings is unemployment insurance, and the solution in most educational settings is tenure.
She doesn’t get “many perks” when she gets tenure. She will still have a quarterly review process that is considerably more adversarial than my annual review process, and she can be disciplined and ultimately fired on the basis of poor performance reviews. Again, I’ve seen this happen so “unfireable” is at best an exaggeration. I cannot think of any other “perks” that she gets except for a guaranteed raise that is, in practice, much lower than the increases in pay I’ve consistently gotten in my private sector job over the last 7 years. If anything, that’s the opposite of a “perk”. There is one “perk”, which is tenure, and it is comparable in value to unemployment insurance.
And it can feel like the end of the world if you are trying to buy a house and start a family and those efforts depend on the income from that non-tenured teaching job.
This might or might not cost the company as much as unemployment insurance, depending on the circumstances. “Orders of magnitude” might be true in some cases, but I think those are rare and isolated. I suspect there is not much difference on average.
I think you’re dramatically overestimating the extent to which lower job security benefits efficiency or similar measures in the private sector.
>Should we be comparing to the private sector average? Isn’t that driven largely by the industries with extremely high turnover (entertainment, hospitality, food service)? Teachers are certainly fired at a low rate, but it’s not clear that the private sector average is driven by poor performance or that high turnover rates actually improve performance in any industry. (Granted, that’s moving the goalposts a bit.) And remember, we’re not counting voluntary separations, but actual removals for cause.
I’m happy to credit this argument if you can show evidence for it.
>“I don’t get any of that, except the relevant one!”
My point is that the others are far more relevant.
>Unemployment insurance and tenure are fulfilling the same function, here.
No, they don’t. UI doesn’t prevent your employer from getting rid of you if you do a bad job. Tenure does.
>She will still have a quarterly review process that is considerably more adversarial than my annual review process, and she can be disciplined and ultimately fired on the basis of poor performance reviews.
I’ve already shown the evidence that they are NOT more adversarial, at least when it comes to actual firings.
>Again, I’ve seen this happen so “unfireable” is at best an exaggeration.
Anecdotes are not data. The data says that tenured teachers are almost never fired.
>I cannot think of any other “perks” that she gets except for a guaranteed raise that is, in practice, much lower than the increases in pay I’ve consistently gotten in my private sector job over the last 7 years.
You probably got those raises because you did a good job. I assume your wife also did a good job, and didn’t get comparable raises. But that’s not the point; the point is that if she HADN’T done a good job, she’d still have gotten those raises. If you hadn’t done a good job, you’d have gotten nothing, or maybe been fired. You’re considering only the upside for good workers and ignoring the bad. I fully grant you that, if you want to work hard, make a difference, and do a good job, public sector employment can be a raw deal, but that’s precisely the problem: the system, on the whole, refuses to either punish bad behavior or reward good, which results in a lot more bad behavior.
> If anything, that’s the opposite of a “perk”. There is one “perk”, which is tenure, and it is comparable in value to unemployment insurance.
Mathematically, the inability to be fired is vastly more valuable than receiving part of your salary after you are fired.
>This might or might not cost the company as much as unemployment insurance, depending on the circumstances.
It’s not just the cost to the company, it’s cost to the managers who have to fill out all the paperwork and build a court case against bad employees.
>I think that is only rare and isolated cases. I suspect there is not much difference in average.
I’ve shown you evidence to the contrary. Please show me what makes you suspect the opposite.
I was more specific than you are taking into account: they are both filling the gap between the employee’s need for stability and the employer’s unwillingness to provide that stability. Yes, there are other effects that may be more or less desirable depending. For the record, I don’t mind considering the replacement of tenure with unemployment insurance for teachers. That’s just not the situation as it stands today.
The thing is, you only have really indirect evidence for bad behavior. If we assume you’re right on with the extent to which public school teachers are too safe from firing, then I think you’re vastly overestimating:
-the ease with which private sector employees are fired
-the positive effect such firings have on performance for private firms
-the negative effect on school performance of the difficulty to fire teachers
You’ve shown me evidence that federal employees are very hard to fire. You haven’t shown me evidence that this causes lots of inefficiency.
You also claimed that the difference in difficulty in firing was “orders of magnitude”, but this article says the private sector average is 3% and you previously cited statistics showing that MA fires public school teachers at a rate of 1% (including non-tenured), so at best I think you’ve vastly overstated your case.
I think you’ve fairly proven that it’s hard to fire tenured teachers, but you haven’t proven that this has much impact on school performance. Maybe we should look at the performance of schools compared to percentage of workforce who are tenured. My hypothesis: schools with higher percentages of tenured teachers will perform better than schools with lower percentages of tenured teachers.
This is because schools with more tenured teachers will have higher morale among teachers, there will be more solid working relationships between teachers and administrators, teachers will have more community ties, and teachers will on average be more experienced.
>I was more specific than you are taking into account: they are both filling the gap between the employee’s need for stability and the employer’s unwillingness to provide that stability.
That people might want them for the same reasons does NOT make them the same. Those other effects are important.
>The thing is, you only have really indirect evidence for bad behavior.
I have plenty of direct evidence. Please stop moving goalposts.
>I’ve shown you evidence to the contrary. Please show me what makes you suspect the opposite.
No you haven’t. However, the contrary evidence is easy, private school teachers produce better results AND are paid less.
>You also claimed that the difference in difficulty in firing was “orders of magnitude”, but this article says the private sector average is 3% and you previously cited statistics showing that MA fires public school teachers at a rate of 1% (including non-tenured),
You’re misrepresenting what I said. I said tenured teachers were harder to fire, and I was right. Their rate of firing is 1/5 of one percent, more than an order of magnitude less.
>My hypothesis: schools with higher percentages of tenured teachers will perform better than schools with lower percentages of tenured teachers.
Show evidence of that. I’ve shown evidence to the contrary.
To make an apples-to-apples comparison, though, you need to compare private sector employees fired after at least four years of continuous employment to firing rates for tenured teachers. I suspect they’d probably be fairly comparable.
Many (from my experience, the vast majority) people who are going to be problem employees are pretty evidently problem employees within their first year. A four-year probationary period is going to weed out a huge number of potential problems before they ever make it into your “tenured teacher” category.
I’m curious about the claim that if a teacher’s contract is not renewed she doesn’t get unemployment insurance. That seems odd–could you explain why? I don’t know the rules in Massachusetts, but checking the web page for the program in California, under “eligibility” I find:
To be entitled to benefits, you must be:
Out of work due to no fault of your own.
Physically able to work.
Actively seeking work.
Ready to accept work.
Why wouldn’t an unemployed teacher qualify? Looking down the web page I don’t find any exception for teachers.
Doing a search for “teacher” it’s clear that teachers can receive unemployment insurance–there’s a discussion of special circumstances when they can’t (if the claim is filed during a recess period and they have an offer from the school to go back to work when the recess is over).
Here is the page for Massachusetts. It has a list of categories of workers that are not eligible, and teachers are not on it.
A search for “teacher” finds:
So as far as I can tell, your claim is not true for Massachusetts. Perhaps you can find something on the web page to support it?
@DavidFriedman: Any “temp” job (that is, a job with a defined end date, like a 1-year-long teacher contract) is ineligible for unemployment benefits. (I can only speak to OH and NC but I assume all States are the same there. Otherwise, you could get unemployment compensation after your summer internship, term of elected office, etc.)
Here are some articles describing how teachers in NYC cannot be fired. http://www.nydailynews.com/new-york/education/city-spend-29m-paying-educators-fire-article-1.1477027
In fact, in NYC there is a term for this “The Rubber Room”, see http://www.newyorker.com/magazine/2009/08/31/the-rubber-room
I don’t know what the correct rate for firing teachers is.
I do know that 0.5% is too low. Anyone who has worked knows that the thought that you could get together 200 of your coworkers — even skipping those who had only been on the job a year or two to leave short-termers out of your pool — and find only one slacker is nonsense.
Again, I don’t know what the correct firing rate is.
For separate anecdote, my dad got unemployment when he was let go from a private school at the end of the school year.
And according to the Massachusetts Teachers Association, laid-off teachers can collect unemployment: http://www.massteacher.org/memberservices/~/media/Files/legal/mta_rif_booklet_2014_web.pdf
I don’t think this is true (in any state). Do you have cites? I looked at several places that implied temp jobs always get unemployment, like this one.
I’ve been doing temp work for about 12 years now, and I think I’ve always qualified for unemployment, although I’ve never collected on it. Of course it is possible I’d get it where I live (Minnesota) and wouldn’t in other states, but I don’t think so.
Edit: I should add that I’ve heard quite often of workers who just work in the summer and live off of unemployment every winter. Anecdotal, so maybe not true, but again, I think it is true.
Teachers in PA [not national, but a lot better than anecdotal, and a fairly representative, large state] have a $50 billion pension fund. Last decade the amount they get on retirement was spiked 25%, and the years required dropped from 10 to 5 under Gov. Rendell. They also have gold-plated benefits. The amount the State is paying for teachers’ pensions was under $3B annually a couple years ago, was $4.4B last year, and will rise to $5.2B in a few years — just for the pension.
The avg HS teacher in my district makes $66k in salary, and $31k in benefits [this is all published, public data.] Longer tenured teachers make up to $97k in salary. You can also sort by ‘specialty’ and with Ns = 1 you get some interesting data: The kindergarten music teacher, Master’s degree, 20 years exp is earning over $125k in salary and bennies per year. For 9 months work.
Of course, senior administrators make significantly more than that!
PSERS costs are approximately 33% of school budgets already. Again, that’s just pension dollars, and not benefits or salaries.
With pensions, it’s less important what teachers get nowadays when they join than what they get when they leave – i.e. what was promised to them 20 or 30 years ago.
Also, if it’s so easy to fire a teacher, how is the phenomenon of “rubber rooms” in e.g. New York explained, where teachers essentially spend years doing nothing because they can’t be allowed to teach (because of some fault) but can’t be fired either? Here a HuffPo article claims it costs $22M a year:
From the collective bargaining agreement you linked
How many places of work guarantee a higher salary for more education, while also paying for that education?
Lots of interesting incentives for not using sick leave
Looking through this it appears as if this would increase to up to $84,000 a year if you hit specific sick leave incentives, and could be augmented with other activities (up to ~$16,000 a year for a dual role of head football and baseball coach appears to be the max), for 183 scheduled work days a year.
This link has a schedule of retirement benefits for Massachusetts teachers starting after 2012. How generous these packages are depends on the contribution rates etc., but very few private salaried positions have anything like defined retirement benefits these days.
Doing a quick scroll through this contract I would say that none of the individual benefits listed seem excessive, but the sum total actually might well be. Frequently in the private sector you have the opportunity to work for a place with either good retirement benefits, or good health benefits, or good vacation packages, but it is pretty rare to have all three available along with other perks (continued education benefits).
We’re talking about the distinction between teachers and the rest of us. The fact that a bad boss can selectively try to make a teacher miserable is a red herring. The issue is why they are forced to resort to doing so instead of just firing you. And the answer, that they’re not at-will employees and they have the backing of a union, is an enormous job benefit.
Yup! In my town it’s 5%, like clockwork, good times or bad.
First, what percentage of contracts are not renewed? I don’t think it’s many.
Second, do you realize that DURING a contract–IOW, during one of those four years–the teacher still has more protection than any at-will employee, which is almost everyone else?
One wonders how many PhDs are working in public schools, outside administration. Answer: one percent, and I’d bet good money that a lot of them have administration roles. And of course one wonders what they’d be worth elsewhere (if they were good enough to teach college they’d probably be doing it), so I don’t know if this is much data.
Another major difference probably being that the PhD is paid for in whole or in part by the school while working, unlike for many folks. And a lot of them are in an edu-field (pedagogy) and not a hard field (math), so are merely an administrator grooming tool.
Well, most obviously you get roughly 16 weeks of vacation instead of the normal 2-3 weeks. You also get a pension in many cases, and the above-mentioned job security; and the benefit of a union; and paid training; and…
Sure, if the employer got to tax the public to pay for it. Our district pays teachers most of their insurance costs.
Late on, you mention this, which I will depersonalize:
I don’t know about you, but for many people: Yes you do, and yes you are.
W/r/t work: Teachers put in long days like many folks–40 to 50 hour weeks are not uncommon–but they don’t put in MORE work than most other degreed professionals. Certainly not if you count the vacation. Yeah, they have occasional long days but so do we all.
W/r/t intelligence: If you’re dealing with an education major then yes, the chances are that your average STEM-major SSC reader is in fact smarter than the teacher.
Saw this table recently: http://www.aei.org/publication/friday-afternoon-links-21/ which suggests the teaching and admin staff grows much faster than student number.
Also, given that pensions cost very little immediately (or at least can be made to seem so) and are a good way to placate unions, I’d expect pension commitments to increase for some time without noticeable effect on costs, until people start retiring and then it turns out the pension fund is underfunded and needs more money in it. Which might be where we are now.
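The “cheap now, expensive later” dynamic is easy to see with a toy present-value calculation. These are made-up numbers, not PSERS figures: the same promised benefit looks several times cheaper if the fund assumes optimistic investment returns.

```python
# Toy sketch with hypothetical numbers: a promise of $30k/year for 20 years
# of retirement, made to a teacher who retires 30 years from now.
# The cost booked today depends heavily on the assumed return rate,
# which is how a big promise can be made to look cheap at the time.

def present_value(payment, years_until_start, years_of_payments, rate):
    """Value today of a stream of annual payments starting in the future."""
    return sum(payment / (1 + rate) ** t
               for t in range(years_until_start,
                              years_until_start + years_of_payments))

optimistic = present_value(30_000, 30, 20, 0.08)  # fund assumes 8% returns
realistic = present_value(30_000, 30, 20, 0.04)   # actual returns closer to 4%

print(f"booked at 8%: ${optimistic:,.0f}; actual need at 4%: ${realistic:,.0f}")
```

Under these assumptions the promise really costs roughly four times what the optimistic accounting books, and the gap only surfaces decades later when payouts start.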
I think you mean “underpaid and overworked”?
“people can just increase costs and not suffer any increased demand because of it” -> decreased demand
“second,,” (double comma)
“elderly people live deal with it every day” -> deal
Also, in one of the first quotes, Scott writes “they conclude:” and then the quote starts with “based on x we conclude”. I would edit that to reduce redundancy.
“we se similar effects”
“grandchildrens’” -> “grandchildren’s”
“I think this is the basic intuition behind so many people, even those who genuinely want to help the poor, are afraid of “tax and spend” policies.” -> “…behind *why* so many people” ?
they comes in, get treated for their meth overdose
comes -> come
why does water costs
costs -> cost
Having worked in a school, I’d be inclined to accept Gazeboist’s correction, but to be fair, they have to teach classes of twelve to eighteen year olds all day, five days a week.
A million dollars/pounds/euro a year wouldn’t be enough to get me to do that. Plus these days, the amount of paperwork and box-ticking is crazy, and you can’t give a misbehaving student a clip round the ear/slap with the báta while they can swear at you, throw things at you, and physically assault you. Teachers are also being asked to be child-minders, do child-rearing (teaching them things their parents should be teaching them about how to be a human being), social workers and more.
From some angles, it’s a cushy job. From others, you need to be really dedicated to it because it’s not for everyone.
In isolation, it could have been either (though of course they would imply different things about his views), but in context Scott clearly meant the opposite of what he initially wrote.
I’d look further at the ratio of administrators to teachers/doctors etc, and the cost of complying with regulation for the fairly decentralized health and education sectors. Which line looks more like the education cost line?
Regarding your question on markets working, one of these things is delivered by a mostly free market, the other isn’t: Google’s first hit for Deplin®, official prescription L-methylfolate, costs $175 for a month’s supply; unregulated L-methylfolate supplement delivers the same dose for about $30.
Why can’t a manufacturer of L-methylfolate supplements undercut the prescription sellers? That’s why markets aren’t delivering the cost savings they should be.
Saying regulations increase prices and limit market mobility is not new or original, but part of the question is whether the cheaper price of the free market is actually worth it. Unless I fundamentally misunderstand what a grey market is, I would imagine that grey-market dealers are not going to be as cautious about their doses, or as picky about how they fabricate the materials. Part of the price of medicine is paying for the FDA-approved label, whether it’s through the better treatment of workers or the better quality of material. The biggest problem with this is that we don’t seem to be treating the workers better, nor getting better quality materials (at least in the fields of education, medicine, and public works).
Why does it have to be the FDA? Why can’t it be some private credentialing organization that customers can choose to pay attention to or not?
If you want to convince us to buy grey-market pharmaceuticals, shouldn’t you be answering that question? Why aren’t there any private credentialing organizations to tell which grey-market products are high quality? If there is one, why should I trust it? I know the FDA has a long reputation for strict certification; if you name some competing organization I’ve never heard of, it might be a random scam.
Because the FDA is mandated by government fiat? Duh…?
There’s no point in a private organization setting safety requirements because the FDA has already set the minimum safety requirements way way too high (on average) and nobody can reduce it (short of fundamental changes to the law). Any organization doing this at a large enough scale to get noticed by the doctors would be noticed by the FDA and probably sued out of existence.
Also, I’m not trying to get you to buy gray-market pharmaceuticals. Where did you get that idea? I’m trying to point out that safety rules/screening don’t automatically imply the FDA.
One example here is helmets. They all have to be DOT certified (in the US). However there is a private organization with higher requirements (Snell). Some helmets get Snell certification and some do not. The distinction is generally understood (at least when it comes to motorcycle helmets) and some customers pay attention to Snell certification and some do not.
The benefit of Snell is that it is possible to set up competitor organizations with different standards without going through the heavy-weight process of legislation. It is also possible to have gradations for different people with different price and risk tolerances.
My questions is: why can’t we do the same for drugs?
For the specific case of nootropics, I think there’s a reddit board that does exactly that.
Whether it’s trustworthy I don’t know.
You’re right, my response was off-the-cuff and I didn’t think carefully about what exactly you were trying to argue for.
The system of having an independent organization like Snell works as long as there is just one of them. Once you have multiple rating agencies competing against each other to get the helmet manufacturers’ business the incentive scheme becomes perverse. Snell would be incentivized to give as many helmets as high a rating as possible because some other rating org could come in and undercut them. If I were building helmets why would I bring them to Snell if I can get an A+ rating from some other organization?
@CarpathoRusyn: By this logic, shouldn’t increased competition in, say, movie or video game review sites result in a race to the bottom where they’re all awarding everything 10/10 in a scramble for favors from studios? And yet that’s not what we see; if anything, the proliferation of review channels on the internet has made reviews more honest and critical in the last 15 years compared to the era before that where entertainment products were mostly reviewed by a few major magazines.
Personally, I’m far less trusting of rating and review agencies that don’t have competitors to verify their results and keep them honest. It massively increases the potential for bribery, pressure and other malfeasance.
@CarpathoRusyn: the competition is two-sided. As a consumer, would you trust a rating slapped on every helmet in the shop?
> Why can’t a manufacturer of L-methylfolate supplements undercut the prescription sellers?
Because quartering the price is probably not going to quintuple the sales.
Regulated manufacturers quite likely have lower actual manufacturing costs, as they have larger scale. They have larger scale because most people want and can afford regulated drugs. Prices for those drugs are high because people tend to prefer not to die in horrible pain, and so place a high value on things that prevent that. Most of the money earned from a high price goes into marketing to turn that high potential value into a high price actually paid. Product development is just a special case of this; a drug that works better is easier to market.
The grey-market sellers have to massively undercut that to make it worth their while. They can still make a profit by leeching off the regulated sellers’ marketing.
I suspect this dynamic affects all the cases of cost inflation; more GDP means people have more money, so they are open to being persuaded to spend more on the really valuable stuff. That persuasion is expensive.
Perhaps I’m not asking my question well: why aren’t consumers with a prescription able to purchase the $30 supplements and keep the savings (surely there are some cheapskates; I know I’m one)? In other words, why aren’t the supplement makers capturing the expansion they could get by competing with the $175 prescription drug makers?
For the specific case of Deplin/Metafolin vs. grocery store folate supplements, the answer is basically that consumers are perfectly free to rip up their prescription, buy supplements and pocket the difference.
However, doctors are not free to advise them to do this, from the risk-avoidance perspective outlined in this post, because if anything goes wrong the patient then has a potential malpractice lawsuit. Doctors theoretically have a free hand to prescribe off-label, suggest supplements or alternative therapies, but deviation from standard-of-care opens them up to increased liability. The “FDA Approved” stamp serves to cover their ass, and since they’re not the ones paying the cost difference they have zero-to-negative incentive to prescribe cheap supplements. Meanwhile, most patients don’t have the savvy or swaggering self-confidence to go around ripping up their prescriptions (and arguably they don’t directly pay the added cost, either) so they go along with it.
Our illustrious host goes into this at length in Fish – Now By Prescription:
“Is the public getting any service from LOVAZA™®© and DEPLIN™®©?
I say: yes! The companies behind these two drugs are doing God’s work; they are making the world a much better place. Their service is performing the appropriate rituals to allow these substances into the mainstream medical system.”
The conventional wisdom is that if the particular medication you need is available both Rx and OTC, you go with Rx if you have Rx coverage and OTC if you don’t. But some Rx plans will baffle you with bullshit, leading to difficult dilemmas. I don’t know whether methylfolate is something likely to be prescribed to a senior citizen, but let’s say you have Medicare Part D (D as in Donut Hole). If you go Rx, you’ll pay the first n-thousand (I don’t know what the value of n is these days) out of pocket before you’re covered, and therefore be unambiguously in the “choose Rx over OTC” category. But if you start with OTC, you’ll never even start paying down your first n-thousand, and Rx coverage is infinity miles away.

Actually, now that I think about it, that was just the normal insurance-with-deductible case. If I recall right, the Medicare Donut Hole is even more guesswork: the first m-thousand is covered, the next n-thousand not, and stuff above m+n thousand covered in theory, although I assume penetrating that part of the range of outcomes puts you near the top of some database query looking for plan subscribers to scrutinize more heavily.

Actually using an insurance plan (especially a health-related one) is basically high-stakes guesswork concerning your future needs and risk exposures. If insurance mitigates the worst-case outcome, then perhaps having additional variables deliberately thrown into your personal utility-maximizing algorithm is a worthwhile price for that. I’m largely agnostic on that question.
But the IP rent seekers putting a patent on some aspect of inserting methylfolate into a capsule, and the grey market entrepreneurs labeling sawdust as methylfolate, are not competing in the same market. One is working what’s left of the jobs-with-bennies crowd, while the other is working the growing avoid-talking-to-a-doctor-if-at-all-possible crowd. Incomparable products, incomparable consumers, etc. Some overlap between the two audiences due to FUD-generating algorithms (computer-aided-drafting of contracts), though.
Insurance agent specializing in Medicare here! Part D is a total clusterfuck. The cost structure goes like this:
Stage 1: Deductible – you pay total cost of your Rx. This is limited by law, but the limit has tended to increase each year. Currently it’s $400, and plans can choose to offer a lesser deductible, or none at all.
Stage 2: Initial Coverage – you pay roughly 25% of the total cost of your Rx, or more commonly the plan has a set of 4 or 5 price tiers with a flat copay assigned to each tier, designed to be actuarially equivalent to 25% overall. Depending on what exactly you’re taking and the plan you pick, you can wind up paying next to nothing, or over 50% of the total cost.
Stage 3: Donut Hole – you go in this when your total Rx cost for the year reaches the limit, currently $3700, again it tends to increase each year. Originally you did pay total cost here, but legislation was passed to gradually close this, so instead you pay a percentage, gradually decreasing each year until it reaches 25% in 2020.
Stage 4: Catastrophic – you reach this if your out-of-pocket cost reaches the limit, currently $4950, tends to increase, etc. You pay either 5% of total cost or a copay of a few dollars, whichever is more.
There is no Stage 5 at which you have 100% coverage.
Notice that the dollar amounts that get you into and out of the Donut Hole measure different things – getting in counts what you paid plus what your insurance company paid, while getting out counts only what you paid. (Plus 50% of the total cost of brand-name, but not generic, Rx you get in the donut hole, because reasons.)
Also, every Part D plan can have a different list of drugs that it covers, though naturally there’s a lot of overlap since they all have to meet Medicare’s coverage requirements. And different plans can cover the same drug in different price tiers. So basically if you take any meds and you’re picking a Part D plan you really should just run your meds through Medicare.gov’s online cost estimator, because estimating your costs under all the available plans by hand is a fool’s game.
For some good news, Part D plans can’t kick you off for having expensive drug needs, and there’s an enrollment period every year where people can freely change and cannot be refused coverage.
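The four stages above can be strung together into a rough cost estimator. This is a simplified sketch using the figures from the comment ($400 deductible, $3,700 initial coverage limit, $4,950 out-of-pocket threshold); the 40% donut-hole coinsurance is an assumption, and real plans use tiered copays plus the brand-name-discount counting quirk, all of which is ignored here.

```python
# Simplified Part D member-cost sketch (2017-ish figures from the comment).
# Ignores tiered copays and the brand/generic donut-hole counting quirk.

DEDUCTIBLE = 400                # Stage 1: you pay everything up to this
INITIAL_COVERAGE_LIMIT = 3700   # total drug cost (you + plan) ending Stage 2
OOP_THRESHOLD = 4950            # your out-of-pocket spending ending Stage 3
GAP_COINSURANCE = 0.40          # assumed; phases down to 25% by 2020

def member_cost(total_rx_cost):
    """Rough out-of-pocket estimate for a year's total drug cost."""
    paid = 0.0
    remaining = total_rx_cost

    # Stage 1: deductible, you pay 100%
    d = min(remaining, DEDUCTIBLE)
    paid += d
    remaining -= d

    # Stage 2: initial coverage, you pay ~25% until total cost hits the limit
    ic = min(remaining, INITIAL_COVERAGE_LIMIT - DEDUCTIBLE)
    paid += 0.25 * ic
    remaining -= ic

    # Stage 3: donut hole, coinsurance until your out-of-pocket hits the cap
    gap = min(remaining, max(0.0, (OOP_THRESHOLD - paid) / GAP_COINSURANCE))
    paid += GAP_COINSURANCE * gap
    remaining -= gap

    # Stage 4: catastrophic, you pay 5% of whatever is left
    paid += 0.05 * remaining
    return round(paid, 2)
```

Even this stripped-down version shows why estimating your costs by hand is a fool’s game, and it leaves out exactly the plan-specific details (tiers, formularies) that the Medicare.gov estimator mentioned above accounts for.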
People don’t like dying of thirst, hunger or exposure and yet prices aren’t nearly as high for water/food/clothing.
Price relates to scarcity, and in medicine it is artificial.
Food and clothes are traded worldwide; a $4 T-shirt comes from somewhere south of China. A hand-made Aran jumper may be more expensive, but not so much as it would be if the alternative were going naked.
Of the expensive stuff, only medicine is traded world-wide. For reasons explained by someone else in another post, US pricing dominates the market in that.
So what you have is a bunch of national-level markets, where ‘cheap’ means ‘cheaper than more expensive competitors’, and ‘within budget’ means ‘within the budget we set by looking at the last thing built’. You get a bit of stuff at the edges like Americans going to Oxford instead of Harvard, or London hiring a German tunnel-drilling company, but that’s inherently going to be secondary.
Demand for all these things is more or less arbitrarily high, so the question is how efficiently that high demand can be translated into actual higher prices. Countries with staid, unionised public sectors will lazily raise prices slowly, countries with more profit driven and hard working private sectors will be more efficient at raising prices faster.
Which is why, when you look at a graph of ‘% of GDP in public sector’ against ‘price of a state-of-the-art jet fighter’, it is a straight line sloping downwards.
Many people had to work 60-hour weeks to get the F-35 to cost so much…
Everything is traded world wide. Food and $4 T shirts, but also every commodity, cars and trucks, airplanes, military equipment. 25% of US doctors are foreign born and medical tourism is a real industry.
You might have to be physically present to get a hair cut, but the barber can come from anywhere in the world.
I was vaguely aware this sort of thing was true, but didn’t realize the extent of it. I agree that this is perplexing and scary.
In the case of college education, some of this seems likely to be due to the zero-sum nature of the “degree prestige” game. A new college would be non-prestigious by definition, no matter how good the education was. If people are willing to pay $200K to go to a top-10 school, no one is able to undercut that by making a new top-10 school that costs less.
That doesn’t apply as obviously to schools lower on the totem pole, but I think credentialism might still be important. There are actually great new educational tools that are way cheaper: Udacity, Coursera, etc. For most people, though, they’re _too_ new and different to count as meaningful replacements for normal education, even if in reality they totally could substitute. Health care, education, and public infrastructure are all fields where a new fly-by-night company is not going to be taken very seriously.
Definitely true for colleges; increased demand (more students both as population increases and more jobs require a degree), and supply can’t increase for these credentialism reasons.
Health care is probably less affected by prestige and more by supply bottleneck; most people won’t care too much whether they go to Fancy University Hospital or Bob’s Hospital and Muffler Shop, but there are only so many white-coated humans around to do the work. (Also in healthcare we have lots of market failures; Bob’s might be way more expensive than Fancy U but there’s no way to know that until the work’s done and you get the bills).
It’s absolute insanity that schools cost as much as they do. We could replace the vast majority of teachers with software and still teach children what they need to know, especially in the higher grades. Why does each kid have to spend an entire year in a class even if they could finish in a semester? Why don’t teachers replace lectures in class with those online? Why does each kid need to spend five days a week in class if they’re spending half that time not doing anything productive?
Mild sarcasm below.
I suspect the answer will be one (or all) of the following:
Won’t somebody think of the children?!
You’re a racist!
Teachers are heroes, why do you hate America?
If you don’t have a degree in education [and pay dues to the teachers’ union] you don’t get an opinion [you child-hating nazi].
Treating children as individuals with different interests and abilities contributes to inequality.
It does seem part of the reason the public sector, and anything attached to it, decays so quickly is that the public sector seeks out areas tied up with sacred values, so that any funding cuts will activate the public’s taboo-tradeoff outrage.
It’s like all governments inherently want to become HPMOR’s Ministry of Magic, where they can employ 3/4 of the population in preventing each other from getting things done. Give it ten years and dark wizards will probably start showing up.
It’s always amazed me that public schoolteachers can literally walk out on their students in the middle of the day to go protest in the streets about how they don’t want to have their performance judged based on student outcomes and public sympathy is on their side.
Judging performance based on student outcomes has a lot of problems. If you judge by absolute improvement, you get problems when handed students who are good and don’t have a lot of room for improvement. On the other hand, it’s also possible for teachers to be given difficult students who are hard to improve, and the wrong method of judging will screw over those teachers instead. Judging by the absolute value of the outcome rather than improvement, of course, just benefits teachers given good students in the first place.
Comparing how much the students have improved with how much they improved last year under another teacher, of course, leads to problems if the other teacher picked the low-hanging fruit.
There are also lots of things that teachers don’t control, but which vary between locations and affect the outcome of their students. You don’t exactly get to pick your class size.
Not to mention that student outcomes are difficult to measure anyway. Tests are very imperfect measures of how much students have learned, but are how it’s going to be measured.
That is entirely not my point. If they walk out because they’re demanding more money, they still earn public sympathy too.
On assessing teachers based on merit: I think this is the main strength of school choice. It’s hard to systematically judge a teacher objectively because there are so many confounding variables, but it’s easy for a parent to decide they don’t like the quality.
Often people retort that many parents don’t take enough of an interest for this to be meaningful, but doesn’t that just mean that worst case scenario we’re back where we started? I don’t think even a large population of parents making the exact wrong choices could do much worse than what we currently have.
A more general complaint: when people criticize policy proposals, they nearly always forget to ask “compared to what?” Any policy is going to have pros and cons, but criticizing a policy isn’t an exercise in pointing out the cons, it’s about comparing the trade-offs between policies.
On-line classes don’t work very well except for the highly motivated.
Same with live classes.
Right. The point of education is to create that motivation. That is exactly what we are paying teachers to do.
You think teachers are effective at motivating students? I’ll resist the urge to make a sarcastic quip and note that I simply don’t agree with that. Anyway, the problem with online education comes from expecting students to work on it at home, where they have all their various distractions. If they are in a school environment and are promised that if they finish early they can be done, then they’ll be much more motivated to do their work.
Same as with any other type of job I’ve ever encountered: some are, some aren’t.
In fact, this situation isn’t uncommon: teacher A can motivate student X but not student Y. Teacher B can motivate student Y but not student X.
Well, I can tell that your position on this comes from a completely dispassionate review of the available evidence. /s
You may not realize this, but there’s this new thing on the market called a “mobile phone”. It’s like a late 90’s-era laptop that fits in your pocket and has better internet connectivity. If you try to confiscate them to prevent kids from goofing off and flagrantly cheating, some of their parents will raise a mighty racket and the administrators will take their side because they have authority over teachers but not parents.
I never got this deal in school!
“same as live classes”
not really, 90% of people doing on-line learning classes drop out
Even if you were right that computers could never replace live lectures for the majority of students (as if they are paying attention now), then we could still save a lot of money just by letting the highly motivated go this route. And if students using their phones was such a problem that teachers aren’t allowed to take them away, then how is that any different from now?
It sounds like you are talking about k-12 and my understanding is that the primary purpose of k-12 is keeping kids in a classroom for five days a week.
The cynic in me says yes, but I don’t think this is true. Most people genuinely believe that students are learning so much and the only reason they don’t learn more is because there isn’t enough funding. Of course, for younger kids, it makes sense to double up as a daycare, but not for high schoolers. If we really thought it was necessary, at the very least we could give them more free time inside of school instead of wasting their time with busy work.
I don’t think anyone genuinely believes this. I think a lot of people genuinely believe that that is what they’re supposed to believe, and that if you challenge the ability of education to save the wicked and miserable you’re a bad person, like Peter Thiel.
I suspect people support “education” as a sacred value because you’re racist if you don’t, or you’re a bad parent if you don’t, or you’re a bad kid who betrayed everyone who’s helped you if you don’t, or you’re un-American if you don’t. But I think most people realise that outside of hard skills (how to weld, how to do the books for a business, etc.) education is pretty nebulous and hard to distinguish from just growing up.
Like, I’ve just completed a philosophy and English degree, and I genuinely can’t tell what, if anything, I’ve learned that I can attribute to the degree vs. everything else I’ve been reading or doing for the past four years. I feel like my one fourth-year seminar on Hobbes really influenced my thinking, and another political philosophy class got me to take traditional values/values that aren’t mine seriously, but just looking at the kind of stuff I would read in my spare time, it seems like I would have gotten to both of those on my own.
Again, most of the effects of “education” are indistinguishable from just growing up.
Maybe there’s a large subset of people who won’t grow at all without outside influence? But then what separates that population from those who won’t grow even with education and parents pushing them?
The problem with college education is that employers (and that’s the relevant audience) value colleges based on current and past employees from that college, or stories from other employers about their employees.
That means that getting any kind of reputation takes a generation – 20+ years before enough people have any kind of assessment of how good the place is, and likely longer for it to really align with the quality of education.
That’s a huge advantage if you’re overcharging and terrible (I think we’ve heard a fair amount of unpleasant things about Trump University recently, but that’s far from the only example, just the one most of us have heard of).
But if you’re super-efficient and good, you’re going to take a long time to build the reputation you deserve.
If we look at the top-rated universities worldwide, the most recently-founded is usually Stanford/Caltech (both 1891) or Peking (1898). More than half are usually pre-1800 foundations.
I’ve wondered about this issue as well, though not as thoroughly as you. Two things that often come to mind that aren’t directly addressed in your six possible resolutions to the puzzle:
(1) At least in higher education, and probably in medicine, the number of people employed by institutions has grown tremendously. At universities, for example, the number of administrators has grown by well over 100% in the past decades, and in general the number of people not directly involved in teaching or research is vast. This drives costs up, though it doesn’t lead to a rise in university salaries. Of course, this doesn’t solve the puzzle, and it raises the question of what has driven this increase in personnel. I don’t know, except that there’s probably a much higher expectation of “support” services than there used to be. (This isn’t to say this is good or bad.) Also, these support people are perhaps ones who in times past would be employed in agriculture or manufacturing — things that are much less labor intensive than they used to be.
(2) Have the real costs of “real” necessities gone down? Certainly this is the case for food, which is vastly cheaper than it was decades ago. A consequence may be that the “excess” money gets absorbed by things like higher education, health care, etc., without much complaint.
Combining these: it’s nice to have cushy levels of support; there’s a willing pool of people who want to have jobs in these areas; there’s money to burn. Result: we spend more on education, health care, etc., though the salaries per person in these areas don’t go up.
I see your second point as a possible explanation as well; I haven’t thought this all through, but a quick pass seems to make sense. We spend a lot less of our income than we used to on food, clothing and household goods even as our income has gone up. The extra money’s going to go somewhere.
People now have extra money to spend on things. Housing, college education, and medical care are things where the supply hasn’t increased significantly, so the costs for the supply that exists will necessarily go up. In the case of housing, the supply of good housing hasn’t increased significantly to keep up with demand. In the case of health care, increasing life expectancies and expectations about end-of-life care increased the demand.
Why are colleges spending money on extra administrators and luxuries? Because they have the money. The school’s going to be full either way, so why not charge what the market can bear? Colleges aren’t competing based on cost (at least until you get into the for-profit sector).
Primary and secondary education are a bit more complicated. This theory doesn’t explain why urban school expenditures are so high compared to the rest of the country, but increasingly the biggest determinant of where people with kids want to live is “how good are the schools?” This means that that’s what they’re going to be nagging their local governments about. If your chances of getting re-elected depend on school funding more than anything else, you’re going to fund schools. I guess this could drive up costs for poorer school districts as well, as they’re still pulling from the same pool of teachers and administrators. I can’t quite reconcile this with ‘teacher salaries are flat’, unless the increase is in non-salary costs like pensions.
>People now have extra money to spend on things. Housing, college education, and medical care are things where the supply hasn’t increased significantly, so the costs for the supply that exists will necessarily go up.
The supply of those things HAS increased, though. We send far more people to college, buy bigger houses, and get way more medical treatments.
Perhaps I should have phrased it as ‘supply increase has not (and can not) keep up with demand’. Everyone can have food sufficiency, TV, a car and a cell phone. Not everyone can send their kid to a top tier college or live in a top tier school district. Some colleges will be at the top of the pile, and some areas will be better to live in than others.
If you’ve got extra money, buying another TV or a more expensive car doesn’t seem as useful on a long term basis as moving to a better neighborhood or pushing your kid to a better college.
My point is that I don’t think we’re bidding up a fixed supply; we’re actively buying more stuff. The average house today is nearly twice the size it was 40 years ago, and has fewer people living in it. In 1950, medical care consisted of a few wonder drugs, some surgical procedures, and some nurses to keep you comfortable while you died. We’re cramming tons more stuff into old boxes, then acting surprised that they’re getting bigger.
> Not everyone can send their kid to a top tier college or live in a top tier school district.
That argument applies to positional goods, but not every good/service experiencing cost disease is positional. I think education and housing are (at least partly), but medicine and transportation infrastructure are not.
I think medicine is, at least more than you might suspect.
The second a new treatment is invented, it is expected to become available to everyone in society. Prisons are being sued by human rights organizations for not providing $100k+ treatments for Hepatitis C, for free, to their inmates.
Everyone expects to have access to the best medical care, and the idea that certain types of care might not be available to certain people strikes most as morally reprehensible.
Matt M: That’s a really good point. Have you thought of any potential solutions to this problem (that is, of people having stupid moral opinions)?
I’m the wrong person to ask. My solution to every problem is the same. Abolish the government. People can either afford stuff or they can’t. If they can’t, they go without it. And that doesn’t bother me one bit.
Perhaps this qualifies as a stupid moral opinion?
I don’t think it’s a stupid moral opinion, merely an impractical one. Unfortunately so, since I share that opinion. The hard part, it seems to me, is getting people to go along with it. It also seems to me that the other side has a big advantage wrt propaganda, because they don’t necessarily feel the need to be truthful.
Matt, there are two problems I have with that moral opinion:
1. Some people will die if we actually implemented it. That’s a moral problem, IMO (I’m thinking of poor people with expensive illnesses, and also people with disabilities and children with crappy parents. Also, people that lose their jobs, particularly during recessions when it’s hard to find a replacement)
2. Those people will hurt other people in trying not to die. That’s a practical problem. Giving them stuff (money, food, housing, healthcare) is likely cheaper than the damage they do trying not to die.
That’s basically McArdle’s position on health care costs – if we want to spend 15% of the economy on health care (and since we are, by revealed preference we must want to), who is anyone to criticize? (Not that I agree…)
Building anything where people already are is more expensive, and that includes building schools and hospitals. NIMBYism and giving 100 different parties veto power over things getting built is often a good way to protect property rights, but it also means that a lot of things never happen.
Yes, there’s more support, but also the expectations go up. Scott gave the example of hospitals that had everybody in one large room, but now put everybody in private rooms. Are we given the choice now? Well, were we given the choice back then? The comfort improved and the cost increased and we took the tradeoff, maybe not always individually but collectively. Now we have a choice between private and semi-private rooms and there are a ton of people who would not accept a semi-private room. Hence the facilities costs are larger and it probably requires more staff to supervise the same number of patients.
A lot of the increase is probably little things. Some public infrastructure has a “1 percent for art” requirement; that wasn’t there when the New York subway was built. Freeway overpasses are now decorated (which I think is a good thing, as people will be looking at them for 30 years). And there are a lot of private construction consultants hired on any given project: consultants to handle public outreach, requirements for some amount of design and study on multiple alternatives for every route.
For subways, I’d love to see two kilometers picked for being as comparable as possible from the US and elsewhere, and then a study that drills down into line-item costs as far as possible.
Now we have a choice between private and semi-private rooms and there are a ton of people who would not accept a semi-private room.
Isn’t there a safety factor also? The more patients sharing a bathroom, the more contagion. The more patients in a room, the more chance of foul-ups: the patient gets the wrong medicine, or the wrong patient gets the amputation.
Plus the stress, and decreased accuracy, in trying to talk to the doctor over the sound of the other patient’s television.
Good thought. I’d love to see a graph showing the total number of people employed in a given profession by year.
The more I think about it, the more plausible this suggestion seems to me. I’m reminded of everyone running to one side of a ship as it sinks. The jobs aren’t in agriculture anymore, they aren’t in manufacturing anymore, and everyone’s getting a college degree – guess I’ll become a medical code filer.
There are some legitimate reasons for some of the rise in admin staff at universities – everyone needs IT support nowadays.
A rise in IT support should, in theory, roughly correspond to reductions in say, accountants adding things up by hand, or secretaries writing out documents and filing them away, or whatever.
A large amount of university support staff today is things like Title IX diversity counselors. Black holes of cost devoid of any real value whatsoever.
It’s also from universities hiring 6 people and keeping them all half-busy as a result of bureaucratic empire-building and social status (more underlings = more prestige) competition between deans and department heads. In the years my wife worked for the University of California, she mentioned seeing a lot of that.
Also: hiring their own recent graduates to various support roles, in order to inflate employment numbers for recent graduates. Not the biggest factor, but has taken place at every program I’ve seen.
reductions in say, …secretaries writing out documents and filing them away, or whatever.
They say that some day, with all this new computer technology, we’re going to get the paperless office! I’ve been hearing that for fifteen years or more. Meanwhile, I’ve just been handed a 46-page new policy that has come in from one of our overseeing bodies and I have to make two photocopies and file them away in different files 🙂
As best I remember, the main job of secretaries in academic departments when I first became a professor was typing and retyping things, largely things written by professors.
That job was mostly eliminated by the shift to professors typing and revising their own documents on word processors. One would think that would have sharply decreased the number of secretaries. But it doesn’t seem to have happened–if anything the opposite.
@ DF. There certainly has been a reduction of secretaries in private industry. Well, they’re called admins now. Yes, they used to do typing, but also a lot of filing too. Neither one is needed anymore. It seems only VPs have admins now — I think what they mostly do is make travel arrangements, since the muckety mucks are constantly traveling. And I suppose set up conference rooms for meetings and spend a lot of time getting two or more muckety mucks in contact, since all of them are probably on the road.
It is nice to see that there are some productivity benefits to a highly computerized office — hopefully someday it will spread to academia.
On (2), this is probably just me revealing my economic ignorance, but my instinctive reading would be that presumably inflation is related to the average increase in costs of things, so a story that certain things have inflation-busting price rises could equally be reported as “staples such as food, as well as certain other things such as technology, have become radically cheaper”. Though I’m not sure if that works.
The way I’d instinctively want to model it is ‘median salary person used to work 20 hours a week to pay for food, 10 for housing, 5 for education/health and 5 for other fun stuff. Now it’s only 2 hours a week for food but 20 for housing and 15 for education/health, and 10 for fun stuff. This is 47 hours but they still only work 40 so they’re all in debt’
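Spelling that toy model out (every hour figure below is invented for illustration, not actual data):

```python
# Toy "hours of work per week to afford X" budget, then vs. now.
# All numbers are made-up illustrations, not real statistics.
then = {"food": 20, "housing": 10, "education/health": 5, "fun": 5}
now = {"food": 2, "housing": 20, "education/health": 15, "fun": 10}

work_week = 40  # hours actually worked

print(sum(then.values()))             # 40: the old budget balances
print(sum(now.values()))              # 47: more than a work week
print(sum(now.values()) - work_week)  # 7 hours/week covered by debt
```

The punchline is just that the categories that got cheap freed up fewer hours than the categories that got expensive now demand, so the residual shows up as debt.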
Any thoughts on how women being integrated in the workforce and consequently, leaving fields such as education and healthcare might have influenced costs?
women were always in the workforce, only recently are they beginning to see the labor they have always undertaken appropriately valued
I think that’s his point? If fifty years ago, women were doing unvalued care and hospitality work, and today they’re doing the same work but getting paid a fair wage, that would look like a massive increase in the cost of care and hospitality, with no other change.
Keynes: “If a man marries his maid, all else being equal, GDP goes down” 🙂
Why would we see a near constant, linear increase?
the transition is smoothed by certain industries shedding their gendered attitude sooner than others, by certain communities providing the amenities needed for women to work, by the noise generated by the labor of women being decreasingly undervalued.
women still aren’t getting paid a fair wage (all fields considered), not by any studies’ conclusions, but the consensus seems to be that we are getting closer
but women weren’t just aiding care and hospitality. pre-ww2 especially, they were unpaid ancillary labor alongside their husbands, especially in the military, mining, healthcare or agriculture industries. Or, with less economic significance, acting as their husband’s unpaid secretary in more white-collar jobs. And those are just the first-order effects.
Unpaid? Did they not reap the benefits of his salary?
@aldi – “women still aren’t getting paid a fair wage (all fields considered), not by any studies’ conclusions…”
If this were true, why not go to one of those fields that pay women an unfair wage, hire only women at a wage 10% above the standard unfair female wage, and outcompete everyone else due to the significant increase in cost-effectiveness?
Blacks used to suffer outright, explicit wage discrimination. If I recall, the result was that white workers organized specifically to keep blacks from taking all the jobs and/or driving down wages, as the black workers were obviously just as good and considerably cheaper. Why doesn’t the same thing happen with male and female workers in the present?
Also, nybbler’s question.
Nyb, that’s a favour economy, not being paid. Hell, even a slave is fed out of their master’s pocket, but we don’t traditionally consider that “pay”.
FC: I’m not sure, but some studies have found that fields have their salaries go down when women enter them. That suggests that it’s something to do with bargaining power – perhaps the perception that women are unqualified in some fields makes their wages go down in others?
There’s going to be some influence; when women were allowed into few jobs other than teacher, that acted as a subsidy for schools. Though that effect would probably pre-date the big cost increases (it would have petered out by the 1980s or so, right?)
For most of these, you can handwave it as the effect of increased government regulation + spending, back in the day when men were men and we didn’t need no stinkin’ OSHA it was much cheaper.
But what the hell is going on with subway construction costing 10 times more here than in Europe? As much as Americans like to complain about out-of-control government, in Europe the situation is much worse, and yet somehow they’re still building stuff cheaper even with mandatory vacation and without at-will employment. I just don’t get it.
One possible theory: The sort of people who keep costs down in the public sector in Europe are working for Silicon Valley in the US.
In fact, some of them are leaving those European institutions for the lucrative millenarian jobs in various silicon locales.
I recently read a very similar hypothesis from a certain dark contrarian school of political theory. I might not agree with much of their thoughts but I’m more than happy to steal their lingo. They called places like Singapore and Silicon Valley “IQ Shredders”, as they attract large amounts of high IQ adults to move there, and have extremely low birth rates. The argument was that these places skim high IQ talent from the available talent pool but don’t replace the talent in future generations, instead generating immense material wealth. I think it’s an interesting idea, and I suspect it’s true to a certain degree, but I’d hesitate to ascribe it total blame for the variance in costs. It seems more likely to be one factor of many.
Grand narrative arguments are notoriously hard to even slightly confirm. Having said that, I’m pretty sure my team at (top 5 tech company) could launch and run a school better than almost any current school, at much lower cost. In fact, look at the Silicon Valley rocket companies. They have an idea, are ruthlessly smart and efficient, and hire and properly manage rocket scientists.
If Facebook or Amazon wanted to disrupt high school education or hospitals, could they? Can Google go create a tech startup hospital in Pacific Heights, trying experimental approaches and choosing their patients? Can Amazon go to downtown Seattle and disrupt the current traffic dynamics with their own new solutions? Can the Facebook guys who build brilliant machine learning models for spatial dynamics search for the best ways to build a new subway at a cheap price?
It’s a compelling idea. All of our most profound benefits and cost savings go through the tech tunnel. It’s just that, at the moment, a good chunk of that benefits our life in terms of watching machines beat humans at Go, Facebook image classification algorithms, and cheap Amazon deliveries, while what humans actually tend to prefer is good family health and education.
Could it be that the smart people are leaving those fields to work at Amazon?
Hospitals, maybe. Not education.
Education is not something that’s amenable to technological solutions. It involves an interpersonal relationship between student and teacher. Students won’t learn anything unless they find some personal connection to the material, and that pretty much always comes through a personal connection to a teacher. That personal connection is what motivates the student to do work that they’d rather not do, and almost everything worthwhile about education consists in making students do stuff they’d rather not do.
(In other cases, students learn by forming a personal connection to the material without a human intermediary, but then those are autodidacts and we know they’re pretty rare.)
Technology doesn’t help. When a teacher gives a laptop to a student, he is putting the most distracting possible physical device between himself and the student. That makes it harder to form the personal connection that will motivate the student to learn, not easier.
Your evidence being what?? I don’t have to have a personal connection to something to learn it, so I think your premise is wrong. By the way I heard about a program where the classes are basically taught by computers and the teachers just say what their tablets tell them to do and it apparently has so far done as well or better than actual teachers for like $100/year or something? (In India maybe?)
You’re able to learn something without being interested in it or otherwise having no motivation to do so? I’m skeptical.
I mean, maybe if I challenged you on something like this, you’d be like “well I’m not interested in the differences between Volkswagen carburetors year-to-year, but just to show you, I’ll learn the shit out of it!” And then maybe you would. But your motivation would be to prove me wrong, and that would make me an effective teacher, our interpersonal relationship having provided you with the motivation you needed to learn the subject.
If your metric is “getting a certain score on a test that’s been rigged to pass as many students as possible despite flagging literacy rates and innumeracy” then sure. If your metric is “achieving literacy, numeracy, and general competence in contributing to society,” then I am skeptical.
I don’t mean disrupt education as in “Create a shiny new app to help kids learn math.” More like, try a ton of different experiments, many of which require massive capital expenditure on structures, teachers, and new methods of attracting top teaching talent in different ways. Measure everything over 1,5,10 years (etc). Maybe experiment with finding high achieving future engineer type students early and double down on investing in them.
Have funding tied to success, eliminate things that don’t work early on. Measure student outcomes ruthlessly. (etc).
Our current system is basically “Hire low talent people to teach. If they are bad pay them more. Make them teach all students in basically the same method. Measure outcome by how they score on a standardized math test. If they score poorly take away their money, but then also try to find ways to give them more money.” (Exaggerating for rhetorical purposes, but this is close to truth)
> Education is not something that’s amenable to technological solutions.
I think your definition of “technological solution” is too narrow. Even if it’s true that a personal connection with a teacher is critical, it is possible to have personal connections with teachers in the context of a technological solution.
Examples of hypotheses an experimental education project could test:
1) Maybe we should start the school day later, as every doctor in the universe recommends.
2) Maybe there are different aspects to the “interpersonal relationship” that could be factored in a different way. For example, you could try having the teacher rotate through 1-on-1 or small group meetings with kids, while otherwise the kids use recorded lectures or interactive apps or just plain old books for learning the actual content. Or you could see if going back to a single teacher for the whole school day works better than the separation by field that current schools use.
3) You could A/B test class lengths to see if it’s better to have longer or shorter classes. Or you could try the block scheduling thing some experimental schools do where you take one class at a time for a month, and then do something else the next month.
4) You could try giving the kids more options for the classes they take. This could happen by, for instance, providing transportation to other locations when necessary, or having specialist teachers who teach the same class across multiple schools (each of which wouldn’t be able to justify the salary on its own).
Those are just off the top of my head, I’m sure a proper brainstorming session or literature review could turn up dozens more.
The key to any of these would be having good visibility into the actual effects the interventions have. We all agree that tests are not a great way to measure things, so have personal interviews with the kids and/or parents and/or teachers, have some sort of online upvote/downvote system, or come up with another way to get visibility. Probably the effects will differ based on location, etc, so it’s probably important to have some degree of federalism, so long as there is plenty of support and visibility for the value of escaping inertia.
If Google was working on this, I have no doubt that they could do 10x better with the same money that public schools currently use.
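To make the A/B-testing idea concrete, here’s a minimal sketch of evaluating one such experiment (long vs. short class periods) using Welch’s t statistic. The student scores below are randomly simulated stand-ins; nothing here is real data.

```python
import random
import statistics

# Hypothetical experiment: 100 students per arm, end-of-term
# scores out of 100. Scores are simulated, not real.
random.seed(0)
short_classes = [random.gauss(72, 10) for _ in range(100)]
long_classes = [random.gauss(75, 10) for _ in range(100)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    num = statistics.mean(a) - statistics.mean(b)
    denom = (statistics.variance(a) / len(a)
             + statistics.variance(b) / len(b)) ** 0.5
    return num / denom

t = welch_t(long_classes, short_classes)
print(round(t, 2))  # |t| well above ~2 hints at a real difference
```

A real version would of course need randomized assignment, a power calculation, and a better outcome measure than one test score – which is exactly the hard part the comments above are arguing about.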
There are 3.7 million full-time-equivalent elementary and secondary school teachers in the US. Good luck filling up the ranks with Stanford grads.
wysinwygymmv, I think you are discounting the possibility for technological gains in the classroom a little too much here.
First, teachers do not spend all day every day trying to actively engage with students. Depending on the quality/youth of the teacher, a decent amount of every school day is going to be spent working individually/in small groups. This time is currently drastically underutilized, and I will agree that for some students it is unlikely that any technology will be sufficient to make this time useful. Not all students are the same, and I have had the good fortune to teach many students who had their own interests and passions and who would be happy to pursue those interests if given the tools to do so; sadly, the Texas state history textbook isn’t the best tool.
While I agree giving a kid a laptop connected to the internet is worse than giving them a book during these periods of time, it seems to me that there is some space between book and unfettered access to the internet where technology might create a better option for individual reading/study time. Something as simple as a smart book that can offer live built in dictionary/pronunciation/translation support would be seriously beneficial. The problem of trivial inconveniences is a serious one, especially for a struggling ESL student who could just give up and pass notes instead.
Going back to my earlier point: students are different. If you have good tools and motivated self-learners, this increases the time that the teacher has to focus on students who need more of the direct one-on-one interaction for their time in school to be time well spent.
I think a lot of the ways technology is currently being implemented into the classroom are counterproductive/neutral but that is not the same as there being no room for technology to improve education.
I am speaking mainly to elementary school as that is where I have experience, but I would think that the proportion of students better able to take advantage of better self study tools would increase in the higher grades if anything.
>There are 3.7 million full-time-equivalent elementary and secondary school teachers in the US. Good luck filling up the ranks with Stanford grads.
It’s not just about Stanford grads. The point is properly matching talent to talent. I had a great, truly great, Caltech-trained physics teacher at my totally mediocre high school. Every year he got to teach 2-4 high-potential kids, and the rest were average. There is nothing morally wrong with being average. But it’s not a good use of resources to have average students learning from a top 0.01% teacher.
I know that sounds bad, but the point is the students don’t meaningfully learn from him in the way his talents let him teach. If you instead had a more hierarchical structure, where students were filtered by IQ, you could match students-teachers more efficiently. (basically how the university system works).
[plucked from context] “[J]ust to show you, I’ll learn the shit out of it!” And then maybe you would. But your motivation would be to prove me wrong, and that would make me an effective teacher, our interpersonal relationship having provided you with the motivation you needed to learn the subject.
Imagine half a dozen Scotts each running an SSC-cloned blog dedicated to a particular course of study — following a textbook chapter by chapter, if necessary. We’re all effective teachers of each other: motivating, critiquing each other’s ideas, bringing in outside/advanced material.
By the way I heard about a program where the classes are basically taught by computers and the teachers just say what their tablets tell them to do
Does that class include students with learning disabilities, physical disabilities and behavioural problems? If you can stream classes by ability so you get the self-motivated disciplined kids who are able to use the equipment and learn from it and don’t need any kind of assistance and who got their breakfast in the morning and will get a meal in the evening when they get home from school and they aren’t wondering if their parents are going to have a meltdown and a screaming row when they’re trying to do their homework, it’ll work fine.
Not every single pupil is like that, though, and that’s our problem: what do you do with the ones who need help? We tried special schools and educational theory moved against those. Do we go back to them? What’s the solution?
All of the people in this comment tree need to learn about the Regional Educational Laboratories and comparable organizations in other countries — they have been fruitfully using statistics based findings to ~disrupt~ education for decades. They are often comprised of the top-flight grads you are talking about. This is all already being done, though it may be quickly undone with DeVos at the helm.
I’d make the argument that part of the reason, not all of it, but a decent chunk of the reason for the lower costs is the lack of at-will employment, mandatory vacation, strong unions, etc.
Basically, the unions know they’ll always be around. No Spanish government, to name a random European country, is going to institute right to work or try to defang unions completely. Sure, they might make it harder to strike or reform some of the labor laws, but you’ll see nothing like the US.
OTOH, if you’re a union leader in the US, you never know when after an election, your state is going to be Wisconsin’d. So, get what you can and provide for your union members until you can’t. When it’s basically all out war between both sides, don’t be surprised when one side takes all the spoils it can while it can, while on the other hand, even if labor and management don’t agree on everything in Europe, labor doesn’t believe the 1st goal of management is to destroy them so it’s easier to make a reasonable deal.
I agree, and would add that the same culture that likely influences union behaviour likely influences business behaviour. The USA has a remarkable history and continued practice of union-busting that seems a little absurd to outside eyes.
I do not get the impression that European labor unions are better behaved than the ones in the US.
There are immense differences between European countries (see the chart at the bottom). France has 19 times more strike days per 1000 employees than The Netherlands, for example.
A very interesting data point is that Sweden has a fraction of the strikes that Norway and Denmark have. So you can see that the differences are truly national, rather than regional.
PS. Note that the tendency to strike is not the only way in which union/employer relationships differ between countries.
American litigation? Is it harder to acquire land or rights to build on it because of increased court cases about “not in my backyard” and “the noise of your diggers kept my cat awake all night, I’m going to court to sue you for infliction of distress and animal cruelty”?
I have no idea. I know we got the EU to pay for all our infrastructure work, back in the balmy days of the Celtic Tiger, but maybe America is already so built up by our standards, it is more expensive and difficult than working from a green field situation where there was nothing there ever before.
We’re not talking about rural Ireland though, we’re talking about Paris, which is plenty built-up and obsessed with maintaining its historic character. And they still can build cheaper and faster than New York.
Then again, France is just strange all around. “It works in practice but nobody’s been able to make it work in theory.”
Maybe they have the streets cleared to work because all the lorry drivers on strike at the ferry ports are not moving through? 😉
I do think that part of it is that countries build expertise.
IIRC, Spain’s AVE (high-speed rail) lines have been dropping in per-km cost over the years (after inflation). Not by much, but Seville-Madrid in 1992 cost more per-km than Madrid-Barcelona in 2012, which seems to be largely attributable to developing their understanding of how to build railways.
There doesn’t seem to be much knowledge transfer within the US. I don’t get the impression that the Second Avenue Subway in New York was built by a team that built an LA or DC subway.
This seems like a plausible theory:
“Most American cities find themselves caught in the Growth Ponzi Scheme. We experience a modest, short term illusion of wealth in exchange for enormous, long term liabilities.” 
Does that argument generalize? Pensions seem like a logical place to look, particularly in the public sector – many cities today are struggling to pay overly generous pension plans. It can happen to corporations too – for example, GM before the financial crisis was talked about as a health insurance business that happens to make automobiles.
It also seems plausible that young companies (with mostly young workers) have an advantage. Maybe we should ask why things were so cheap back then?
Where else are there long-term liabilities that were put off?
No, that’s saying exactly the opposite. It’s saying that school costs a lot more than we admit, making the real problem even worse than Scott’s post. Also, school buildings are often not counted as school spending.
No, it’s saying that the low costs of the past were a lie.
Yeah, I guess that’s what the 4th paragraph is saying. But there is a lie today, as well, to the extent that defined benefit pensions still exist. Anyway, it would be simpler to say: much of the current budget is going to retired teachers. This is a simple, concrete statistic that ought to be available.
Isn’t that what happened in Greece?
The graph “Percentage Increase in Consumer Prices Since the First Quarter of 1978” has to be showing nominal prices, not inflation adjusted prices, since it shows CPI going up by 279%. So the figure for Tuition & Fees on that graph of 1,225% is also not inflation adjusted. So I don’t think your reference to inflation adjusted college cost dectupling or “Inflation-adjusted cost of a university education was something like $2000/year in 1980. Now it’s closer to $20,000/year” can be correct.
I’m taking the “dectupling” claim from the site I link to there, not the graph. The site says it cost $1600 or so in 1980, compared to $12000 or so in 2008 (with presumably some further increases since then), which I think supports the dectuple claim. I think the site suggests higher increases than the graph because I’m focusing on tuition only (excluding room and board) in four-year universities.
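For what it’s worth, the arithmetic on those two figures (both already inflation-adjusted, per the linked site):

```python
# Tuition-only, inflation-adjusted figures quoted above.
cost_1980 = 1600   # ~$1600/year in 1980
cost_2008 = 12000  # ~$12000/year in 2008

ratio = cost_2008 / cost_1980
print(ratio)  # 7.5, i.e. 7.5x by 2008; the increases since then
              # are what push the claim toward a full 10x
```

So the two figures alone give 7.5x, and the “dectupling” claim leans on the post-2008 increases on top of that.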
I suspect many colleges are actually getting a fair amount of money cheating their students on rent. Not in the sense of breaking a contract, but banning student participation in the local rent market. This is the case at least at my school, where students are required to live in dorms for two years (after which point they are kicked out because my school admits too many people), despite the availability of housing that is both better and cheaper within a block or so of the campus. I suspect things are similar elsewhere, but do not know.
Anecdote against your anecdote: my university subsidizes housing by about 50% compared to prices in the neighborhood. (US west coast)
True in the case of most urban colleges, not in the case of those located away from cities.
But in cities, housing costs have also skyrocketed, and colleges usually own all the closest land, which is then underdeveloped compared to those neighborhoods (see: Columbia U) – which then itself increases rent prices nearby due to induced scarcity of nearby land.
My university did similar. Mandatory housing on campus first year, a lottery system to let you leave the second year, and then no housing thereafter. Meal plans were even mandatory the first year, at ~$10-$15 per meal.
The town had massively overpriced rent because they kept refusing zoning for new apartments (to “preserve the town’s character”, and no it wasn’t SF). Despite that, I got a 50% cost reduction when I escaped university housing. It was a pretty blatant cash grab, and they were talking about building more dorms to extract more revenue.
(A friend at another school got “mandatory the first three years, scarce lottery to leave the fourth year” despite living in a high-vacancy town where the dorms cost ~4 times the local average rent.)
“Lottery to leave” surprises me; before they ran out of space it was a lottery for upperclassmen to remain in the (heavily advertised) dorms. It was the case that all the dorms required meal plans, and even the ones with “ranges” didn’t have freezers or enough of a stove to really make cooking practical.
I have this exact situation at my school, are you an Ohio State student as well?
The numbers at the NCES site are given in current dollars, not constant dollars. Good catch, Prof. F.
Anecdote about risk-tolerance (and fear of being sued): I’ve gone to climbing gyms in Israel, France, Switzerland, and the US. In the first three, I walked in and got asked if I knew how to belay. If I said no, they spent 10-20 minutes showing me how and let me go about it. If I said yes, they just let me go (or maybe just took a minute to make sure I really knew what I was doing).
When I went in the US, I had to sign two different liability forms, then either take an hour-long course (which cost extra and was only given in the afternoon) or on one occasion, pass a test (I got one detail wrong, which got me sent back to having to take the course), after I said I knew what I was doing.
To put this in perspective, climbing wall injuries are incredibly rare.
Where in the US was this?
So in my experience:
In CA (specifically Hangar 18 in and around LA): New people at the gym could get a twenty minute orientation on how to belay. The instruction was fairly competent. Afterwards they could belay as they would like. If they already knew how to belay, they just needed to demonstrate the skills which took about five minutes.
In NYC (pretty much all the gyms in NYC): people who already know how to belay can get a belay test which is free and takes a few minutes. New people have the option of taking a class on belaying which takes about an hour and costs $$$ but at least is fairly competently taught. “We” (me and my friends) suspect the class is there because it is a significant revenue source for the gyms. We don’t think it is related to any specific risks of being sued because belay tests are fairly lax (for top-rope climbing at least) and the liability waivers are fairly comprehensive (and because the class fees are such obvious motivators).
In TX (a while ago, I don’t remember the exact gym): people who already knew how to belay could take a belay test. Those who didn’t got a very incompetently taught lesson, though grigris are so difficult to mess up that it probably doesn’t matter in the gym itself.
I’ve never had to sign more than one liability waiver per gym. In some cases they let customers do it online in advance, which is pretty convenient. Also, in my experience top-rope belay tests tend to be pretty lenient. Lead tests are usually more stringent, but that makes sense since there are a lot more ways to hurt yourself leading. Still, I’ve taken lead tests in quite a few gyms by now and they generally don’t take more than 10-15 minutes. For the most part it is just climbing a fairly easy (5.8-ish) route with one fall and then belaying somebody doing the same thing.
Connecticut and Minnesota. It’s good to hear my experiences were non-typical.
Where in France was this? My experience with climbing gyms in France is exactly the ridiculousness you describe for the US.
Grenoble, where they were really easy-going. I don’t know that much about French culture – maybe mountain towns are more easygoing? maybe it was just that one gym?
I had interestingly similar experiences in South Korean gyms. Slapping the equivalent of 5 bucks on the front counter was all you needed to lift in peace – towel and locker provided gratis. Monthly and yearly plans were, of course, discounted from the nightly rate. $20-30 a month was average; the high-end places with associated saunas went for around $50. When I got back to America after a few years, imagine my shock at the reams of paperwork – liability waivers, personal information, credit checks, even – that were required to sign up for (mandatory) yearly plans at pretty much every establishment I encountered. Suffice to say I was less shocked to see the domestic costs roughly double those of their Korean counterparts (at no noticeable difference in quality).
America is an expensive place to live. Anyone who has lived abroad can attest to that, and Americans who haven’t lived abroad don’t really understand how much more expensive everyday life is.
Declining population-level intelligence, executive function, conscientiousness, and so on due to pollution and dysgenics.
(You’d have to explain away the Flynn effect, mind.)
Yep, look at Evropa by way of contrast, clearly it’s steadfastly Aryan enough to resist the (((degeneracy))). /s
I think you’re safe without the “/s” – Evropa puts your post soundly in parody territory and the (((triple parentheses))) push it over the edge. However, I’m not sure @psmith was actually talking about race, because he followed it up with evidence contrary to the proposed hypothesis. The only thing suspect to me was “dysgenics”.
Since you mention it.
“Declining population-level intelligence”
“You’d have to explain away the Flynn effect”
This is, what, racism as an article of faith? Where do you work? What’s your IQ?
Why is everything always about race with you people?
Anyway, it’s not like the Flynn effect is uncontroversial. Check out the results on backwards vs forwards digit span.
No idea, but I stayed in a Holiday Inn last night.
Well, consider the worries of “us people” put to rest.
I would welcome some reading on causal influence of pollution and dysgenics, if you had any lying around. Since you seemed to start with that.
Not OP, but Google seems to tell me that pollution impacts education and scores. I’m not sure whether this effect persists to adulthood or if it is temporary, but it worries me regardless. The only studies I could find about IQ from Google Scholar were on the side-effects of lead, which only really concerns places with lead pollution. However, if we have conclusive evidence that certain medications in childhood cause long-term problems, I don’t see why pollution, being more prevalent and less regulated, would lack complications. The question is whether these impact society or individuals.
Maybe China is doing studies on it? I recently heard news that they were considering green energy, which is not typical of them.
Dysgenics: I was thinking of the Iceland paper on genetic variants associated with educational attainment. Small effect size, to be sure, but it was in the news recently.
For pollution, I was thinking Robertson on atmospheric CO2: As the degree of acidosis increases, somnolence and confusion follow. But there are a million other things that you could follow up on here: lead, fatty acid composition, the effects of various common drugs on the fetus when taken by the mother during pregnancy, the correlation between living near a highway and dementia, the correlation between anticholinergic drug (e.g. Benadryl) use and dementia, epigenetic effects of artificial lighting….
the juxtaposition of those two sentences argues for him advancing a theorem but pointing out the obvious flaw, which if it could be fixed would make the theorem more powerful
not endorsing the theorem as-is
You’d also have to explain how it matters more than the clear and positive effect of unleaded gasoline (for example, that’s very likely to be a big part of the increase in minority test scores Scott talks about in paragraph 5)
Lead was a big deal, but the SAT at least was renormed. Maybe it’s chronic acidosis from increased atmospheric CO2 levels.
A theory: what if the underlying cause of this is technology? A question struck me in the shower: to whom does the money go? It isn’t higher wages. Some no doubt goes to our capitalist overlords. But I think a commenter above has the right idea. More people are involved in providing basically the same service. And why is that? Advances in technology – mainly in agriculture and manufacturing – have made the jobs “they” did before redundant. And so people “invent” jobs for themselves and others with only very marginal benefits in sectors that are able to absorb this, simply to be able to provide for themselves.
So basically an uncoordinated attempt at basic income and employment therapy paid for by increased productivity in other fields!
Have you read Lafargue? He wrote about the Right to be Lazy and prefigured exactly your concerns about invented labor.
Exactly what we see today with the bucket of crabs situation in healthcare and the wasteful competition in marketing/advertising
+1, though I would like to add, probably all those administrators in universities and schools have some benefits as well as costs. The people in universities relying on mental health services, for instance, would like to have a word with anyone who thinks we should go straight back to the seventies. Other potential benefits are more elusive. Some argue that advertising helps people “self-actualize”. Who can compare the pleasures of buying a Coke before and after indoctrination?
Most students have health insurance through their parents, and could seek out care in the community instead. I think in some places like Grinnell this might not be as possible, but there is no reason a school in a major city needs its own doctors.
I agree that there is no obvious reason to prefer that doctors in general come through schools (and an obvious reason against – that everyone, and not just students, should have doctors). Accessibility services, however, include programmes targeted to the situation of a university. This can include things that could have specialization-based cost savings, as well as things that a university needs to directly engage with. Examples of both kinds: an embedded counsellor with expertise in the stresses of a given institution; a system of note-takers that allows people who cannot take their own notes to have notes; the kind of training that hopefully results in aegrotats or rewrites for people who have an anxiety attack in the middle of an exam. All that said, I am not sure what relevance the preferability of care being available more generally has, since my claim was only that mental health services in fact get used and benefit their users. Either these students do not have the access to care outside of their universities you suggest, or they are underusing their health insurance through ignorance, in which case health insurance is now cheaper for everyone else.
This resonates with me emotionally, but it doesn’t make sense economically. Yes, unemployed people would like to have easy jobs doing pointless work, but why would anybody employ them? Shouldn’t some business get rid of all of the useless positions, charge half as much, and make a killing?
Many of those positions are in public institutions, though, where it’s very difficult to fire employees and supervisors rarely want to decrease the size of their department since it means a reduction in their personal prestige.
My wife recently took a job as an administrator at a private college, after having spent the last few years in the UC administration. The first thing she said when she came home on the first day was, “I cannot believe how busy people are. In my office at UC, many of the staff were idle for half the day or more, just killing time. Absolutely no one is idle here.”
So where’s the money going in private colleges?
Well, my theory would be that the “useless” positions are mostly to be found in the public sector or in other fields where perverse incentives make it easier to hide or justify the extra cost (e.g. governance, education, health care). Also, in big companies where the average employee cares more about themselves and their closest colleagues than the company bottom line.
Some testable predictions: (1) countries with lower technology levels and a bigger agricultural sector should have lower costs for education and health care (adjusted for possible confounders) and (2) a lot more people are employed in education and health care (although mainly in various supportive or administrative functions) now than fifty years ago.
I don’t have time to check myself because I need to be seen working at my sort of useless position in local governance…
I thought about that a lot and I believe the fallacy here is to think of companies as rational actors (“why would the company hire this person?”). But, of course, they do not have personhood and do not make decisions, except via their employees. Once you look at it from the perspective of the individuals involved (shareholders, CEO, middle manager, the worker), the whole proposition suddenly makes much more sense.
I’ve written about it here: http://250bpm.com/blog:44
My analysis is that organizations gradually expand simply because there is constant creation of new positions while other positions become obsolete, and for the people in the organization it is much easier and more pleasant to hire someone for that new position than to remove the obsolete position. So you need to make forced cuts just to stay at the same level. It is very painful to eliminate people, because you have to look people in the eye and basically tell them that they are useless. Shareholders tend to hire relatively sociopathic people to run organizations, because they are willing to do this.
In politics, making cuts has a different dynamic, because it is fairly easy to convince the public (who tend to be fairly badly informed) that cuts are permanent, rather than part of a normal process of expansion, followed by cuts, followed by expansion, followed by cuts, etc. Instead, they think that cuts means less money going to that field permanently, rather than that cuts are necessary to stay at the same level. Surveys in my country consistently find that voters believe that the cost growth is much less than it actually is. So it is easy for various interests groups to aggressively lobby to limit these cuts below what is necessary, because the voters are not just shareholders, but also consumers who think that they are being short-changed if cuts are made.
PS. Note that shareholders have their own blind spots, for example, they tend to believe that you can attract good CEO’s by paying more, so they participate in huge CEO wage inflation. So private businesses have their own cost disease.
Or external consultants who go through a ritual of considering every possible source of corporate malaise before invariably recommending redundancies.
I see that as the same thing, as those are hired by the CEO’s. Nothing is stopping the public sector from hiring George Clooney.
If this were true newly formed businesses would have huge edges over established businesses.
In some ways they do; Netflix destroyed Blockbuster, despite Blockbuster having a guaranteed income stream, an existing brand, and extensive contacts within the organizations selling movies. What they also had was an extensive organization optimized to renting physical disks and a lot of people who had a lot of experience in running storefronts. The people whose ricebowls would have been broken by a pivot to streaming were able to fend it off, until everybody’s ricebowl got broken when the company went under.
They often do. My favorite example is Dell Computers. Prior to Dell, there was no direct way to buy computers from manufacturers; you had to order them all through retailers, because the retailers threatened to cut off any manufacturer who started doing direct sales, and that was too big a hit for any of them to take. Dell got started by only doing direct sales, and carved out a major niche for itself. Established institutions have resources and reputation, and those are big advantages, but they lack agility and generally have trouble adjusting to genuinely new modes of operation.
@baconbacon: Don’t be so quick to dismiss @Aapje’s point; small organizations do have a huge edge in operational efficiency, but they also have huge entrenchment disadvantages in many markets, and some businesses benefit from economies of scale that are difficult to service without growing an organization to the size where it becomes sclerotic.
The inability of that single principle to completely determine the winner in a market should not be used to dismiss it as an explanation for why costs could increase over time in sclerotized markets independent of service quality.
Yes, that is exactly why you see the constant waves of layoff after layoff at ongoing and profitable large companies. Between these layoffs, these firms are expanding, and the only way for them to compete with new startups is to constantly go through layoffs. The firms that don’t do this are ones who can’t compete and disappear after a while.
Well, this would exclude quickly growing firms, that may manage to direct this expansion to where the company needs more people. Although this is a very difficult task, so many growing firms end up with too many of some people and not enough of others — and these firms disappear too.
And this is why government services are usually so much less efficient than private industry. They gradually expand, don’t have the political ability to constantly lay off, and so have more people than they need. I think this explains much of Scott’s “cost disease,” since most of the increasing costs are with governments or those regulated so much that they act in the same manner as governments.
The things you mention mostly REDUCE costs: economies of scale, branding, operational efficiency. For excess useless employees to drive up costs they have to on net be a larger cost than the benefits of size, which automatically makes smaller companies that could avoid them more competitive in an open market. Fantasies about organizational bloat happening in an open market always fail to include such considerations.
That makes no sense since small companies by definition do not benefit from the advantages that big companies have. As such, they need to be considerably more efficient than big companies to be competitive (or find a niche that the big company fails to serve).
You also forgot ‘corporate welfare’ as a factor, which benefits big business much more.
You are trying to artificially separate out two related things. “Big companies have advantages of size, but big companies have lots of low value employees”. Small companies don’t become big and then hire more employees, hiring more is part of the process of growing.
You can’t have it both ways*, either the small company has to hire lots of good people to become a big company and so gain returns from the economies of scale or small companies are hiring lots of bad (inefficient, whatever) people along the way and still becoming large companies.
*You can postulate that many of the people who were hired when the company was size X were good for a company of that size, but they are bad for a company of size 3X.
>The things you mention mostly REDUCE costs: economies of scale, branding, operational efficiency. For excess useless employees to drive up costs they have to on net be a larger cost than the benefits of size, which automatically makes smaller companies that could avoid them more competitive in an open market. Fantasies about organizational bloat happening in an open market always fail to include such considerations.
Economies of scale are not a natural law of the universe. Diseconomies of scale are far more common. As organizations grow, overhead grows geometrically, not linearly. A small company can get by hiring people on an ad-hoc basis, but soon you need an HR person, then a whole HR division. And while those HR people do do a lot of the work that everyone else used to do, they also generate work. Once you have an HR person/division, they start writing rules that people in the other departments have to learn and comply with. You start needing meetings between HR and other departments to argue over those rules, and so on. There are circumstances where there are compensating economies of scale to make up for the inherent diseconomies, like large-lot manufacturing, but they’re the exception, not the rule. The advantage large companies have is not that they are more efficient – they rarely are – but that they have more resources to throw at the problem.
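The “overhead grows geometrically” point can be sketched with a toy model. Assume (and this is purely an assumption for illustration, in the spirit of Brooks’s law, not a measured fact about any real firm) that coordination cost scales with the number of pairwise communication links among n employees, while productive output scales with n:

```python
# Toy model: coordination links grow quadratically while output grows linearly,
# so overhead per employee rises as the organization gets bigger.
def pairwise_links(n):
    """Number of possible communication links among n employees: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} employees: {pairwise_links(n):>7} links, "
          f"{pairwise_links(n) / n:.1f} links per employee")
```

A 10-person shop has 45 possible links (4.5 per head); a 1000-person firm has 499,500 (499.5 per head). Real organizations mitigate this with hierarchy and departments, which is exactly where the HR divisions, rules, and inter-departmental meetings described above come from.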
Hey, I was mocking Krugman’s Nobel prize before it was cool.
As far as it pertains to this discussion, you don’t get cost bloat this way. If you have advantages from size in one place, and disadvantages in others they will (in a competitive environment) have to balance out or on net be cost reducing. If they aren’t then a different sized company will have the opportunity to steal market share away.
My argument is that big companies are generally less cost efficient in operating efficiency, but more in other dimensions. Overall, the latter can mean that the big company out-competes the small companies, but this doesn’t magically make the big company superior in every way.
As a separate, but related argument, I also believe that some of the factors that benefit big companies are regulatory failures (such as corporate welfare that is biased to large companies) and homo economicus failures (like people favoring ‘brands’ over actually researching the quality of products).
When I was hired at my current job I was the only person in my position and was able to handle the workload easily. Eventually I left. I was offered the job about a year later and decided to come back, only to find that I was one of three people doing what I had been doing alone before. Oh, and they restructured, and the three of us have about half as much work as I had when I first started. I work for a Fortune 500 company that employs about half a million people, although I imagine the two-thirds redundancy is probably not true in all departments.
The marketing arms race mentioned above is tempting to believe. It’s pretty much zero sum spending since you can’t really increase aggregate purchasing power of your consumers with better marketing. I have no idea if spending on this or employment in the sector has actually increased, though. It’s not pointless jobs, but the only reason the company gains from it is because everyone else is doing it, too. It’s not making their product any better or their production processes any more efficient.
Of course, public schools don’t market. Drug advertisements have always struck me as weird and totally pointless, though.
When your society is organized around the concept that everyone needs to either get a job or starve to death, it isn’t surprising that it finds mechanisms to preserve or create jobs even if they are strictly unnecessary. In some respects, our system is designed to treat waste as a feature and not a bug.
Consider that you aren’t necessarily looking at one company hiring three people to do the job of one; you could be looking at three *companies* simultaneously doing the job of one. This is a danger any time “employees required” doesn’t scale linearly with “customers served”, which I would expect to happen in many cases across different fields.
We could surely have drastically reduced health care costs by adopting some kind of socialized system along the European model, but that is politically impossible because all that wasted money is providing hundreds of thousands of (redundant) jobs in the private health care market. We ignore a social technology that would improve efficiency because its adoption would throw unacceptably many people out of work; it’s like Luddism but with far more political clout.
Also, don’t assume that all redundant jobs are easily identifiable as “easy jobs doing pointless work”. Most college lecturers probably don’t think of their jobs that way. but employing thousands of professors to deliver individual, yet nearly identical calculus lectures all around the country year after year after year has been an exercise in mind-boggling inefficiency ever since the invention of film. Of course, the system preserves their jobs nonetheless.
Using recorded lectures has been tried, and it is not job protection that keeps us coming back to the lecture model. I am not an expert on education, but from my own experience here are some disadvantages that come from recorded lectures:
1. It is hard to maintain consistency between the recorded lecturer and the other parts of the course if the lectures are static.
2. Recorded lectures lack the adaptability needed to answer questions the students may have.
3. This may be a bug in the way humans learn, but it is hard to pay attention to a recorded person talking to someone other than you about something you don’t care about. In short, recorded lectures are boring, and people learn less from boring lectures.
I realize that it would be cheaper to use recordings instead of professors (or, more realistically, graduate students) to teach calculus classes. Nevertheless, there is also a cost of eliminating the human element that cannot simply be ignored if you want to do a complete analysis.
I do listen to a lot of video lectures, and find them very educational. One thing I have noticed is that production quality matters *immensely*. Presently, most of the materials available are from projects like MIT open courseware where the lectures are intended for live students and recorded as kind of an afterthought; that doesn’t work very well. You really have to make producing a high-quality video-lecture the first priority in order to have a product that’s easy to pay attention to and learn from. They work very well if done properly. I think the only reason more materials like this don’t exist is that people just haven’t made it a priority.
I certainly don’t propose that the human element can be completely eliminated, but I do think that lectures could be treated as a resource much like books, with human experts present for “office hours”.
I have long been puzzled by why lecturers were not replaced by books shortly after the invention of printing made books cheap. Video is just the latest incarnation of that puzzle.
To put forth a theory: humans are evolutionarily adapted to focus close attention to other humans because they are potential predators. For many people, it is easier to absorb information when that instinct is helping them to focus.
(I’m most definitely not one of them, and my guess is that many others here on SSC are not either. Give me a book any day.)
Somebody linked above a discussion regarding why lectures still exist when you can just record it and throw it on YouTube. That was in OT 49.5, so I’m going to push back on it here: I despise taking online classes. I learn much more easily and effectively when I can interact face-to-face with the instructor, even if it’s in a large lecture hall; I’m that guy who has no problems asking a question if I don’t understand something. If I’m stuck taking a class, I’m going to learn whatever I can from it, Goddammit. I learn a lot more in less time from a lecture than I do from reading a book.
It’s a lot harder even in a live webcast to ask for or get clarification than a live lecture. It’s obviously impossible in a recorded class. And, this shouldn’t be the case but it is, when I have scheduled lectures it keeps me on track in a way “read this book” or “just pull up this YouTube video” doesn’t.
It’d be great if I could be one of the many people here who learn best from sitting down with a book, but I’m not. I don’t think I’m uncommon.
I think most college classes are set up just that way, at least at the higher levels of college – one reads the book to learn the subject, and then goes to the lecture for clarification. It is exactly those people who learn best from the book that would find a recorded or online course useless. The point of going to class is to ask questions.
But it is enormously more expensive to learn stuff from listening to lecturers than to get the information from books – though maybe not so much if recorded or online. As I remember my college years, there were too many students who depended on the teacher to learn the material rather than the books they had to read. But for the best learning, you need the experts to help you with questions. If you only have books and recorded or online classes, then you don’t need the college at all.
I can understand the advantage of interaction. But if a lecture is given to a class of several hundred, as university lectures quite often are, there isn’t much interaction with the individual student. You get to ask a question and have a response once every several lectures. You also get to hear the responses to questions by other students–but you would get that in a recording of the lecture as well.
On the other hand, a book has large advantages over a lecture. You can read the parts you have trouble following slowly and several times, and move quickly over what is obvious. The author can do a much more careful job of presenting and explaining his ideas than in a lecture. You get to read the best book on the subject ever written – whereas, if you are lucky, you get to listen to the best lecturer in the field currently teaching at your school, possibly one of the thousand best currently teaching.
So why isn’t the optimal policy to entirely replace the lecture with reading, and provide the interaction in a different format with a much higher ratio of teachers to students, like the Oxbridge system with the lecture part replaced by books?
I have never heard of anyone rejecting socialized healthcare on the grounds that it preserves jobs. Typically the people against socializing things are also the people in favor of creative destruction. Have I misinterpreted you?
I suspect you *have* heard that, you maybe just didn’t realize what you were really hearing. When lobbyists of the private insurance market turned out in force to convince Obama that a single-payer option would be a disaster for American businesses, that is what was happening.
It’s not so much an argument people make as that the people who hold those jobs have unions/professional associations that will lobby like hell against any proposal that would result in their elimination.
That economic argument relies on a functioning competitive market.
If there is an elite college A that needs to charge high prices, since part of what it offers is a status symbol, it then has the resources to pay people for useless work (especially if it can claim that this enhances its status somehow) or to provide luxurious facilities, etc. A new entrant into the market couldn’t undercut it with cheaper tuition because it won’t have the status – and besides, for-profit colleges are icky or something.
The market for subway tunnels isn’t exactly competitive either. The government selects a contractor based on a not entirely transparent or truthful process, and then the contractor builds one tunnel. For a truly competitive situation, you’d need many companies building many tunnels and then charging fees to cover their costs.
Alas, the nonfungibility inherent in infrastructure makes it really difficult to adapt to a private market model.
Medical costs are such a mess that no real competition on price is possible, imo.
This is a very interesting thought. I often marvel at how much modern economic activity seems so…unnecessary. It’s an old trope to complain about the American consumer economy being based on fools shilling useless trinkets to each other, but there is some truth in it. Three generations ago your Cat Psychologist would have been a farmhand and your cat a mouse-hunting tool. Now there are so many fewer farmhands, and machinists, and foundry workers, etc etc etc. At a certain point, we might actually have more people than we need workers to provide our economy with the basics for healthy, happy, stable lives. What to do with the “surplus” population of workers? Cat Psychology! You get the point.
Ask yourself, if I woke up tomorrow and my entire profession ceased to exist, could civilization continue? I suspect that for many, many people the answer is yes. And how many consumer dollars are spent each quarter on those types of goods/services (that is, the more or less useless ones)?
Have you read Piketty?
How far back can you trace this trend?
Keynes, writing in ’35, was also soon to be vindicated by the economic upheaval of WW2, which would shake down every country involved, force them to invest in infrastructure and use up savings, and then depopulate them a little, setting them all up for trente glorieuses to greater or lesser extents as their economies grew back into the space vacated.
Lots of centrist governments with a mandate to repair their countries and reward their electorates were in power, lots of effort was being made to reward electorates. So for a while https://en.wikipedia.org/wiki/Post–World_War_II_economic_expansion we all saw what looked like consistent, spectacular growth that all could share in. And then it abruptly stopped when finance decided it wasn’t going to be pushed around and captured legislatures in most western countries by the late ’70s.
“Cost disease” could also reflect the increasing consideration of externalities, which I personally hope will one day turn into true environmental full-cost accounting. You can’t graze your herds on a commons any more, you have to pay rent. You can’t chuck the waste in the harbor any more, you need to pay for treatment, sorting, and taxes on the municipality maintaining the sewage infrastructure. And so on. I hope that as more of the resources we collectively need are considered in transactions, the market will either start to behave itself or fall away entirely.
This is a fun and exciting collection of trends, but what little there is to link them just shows how far we have to go and how “cost disease” is a canard.
What happened in the 70s is an interesting topic. Prior to the Great Recession, Scott Sumner was focused on how different countries reacted in different ways. Denmark stood out as the one that had shifted most toward neoliberalism (though New Zealand arguably rivalled it), hence the title of his paper, The Great Danes. The least neoliberal developed country, Greece, serves as a sort of Goofus to Denmark’s Gallant, although much of southern Europe is similar. Of course, Sumner is a neoliberal, and you only have his word that he identified Greece as the least neoliberal back when it was booming rather than after it crashed. Someone else might come up with a different measure of statism/liberalism (and indeed Sumner modifies the Heritage measure by removing two components).
This also reminded me of another thing. Many economists, like Sumner, point out that the Greek government spends a large portion of its GDP even after austerity, whereas poorer Eastern European countries spend less and are understandably reluctant to subsidize Greece. These economists generally think Greece should just change its system to fit within its new Eastern European-sized budget, but just as there’s no obvious way to replace our currently expensive sectors with those of the past, I don’t think even a parliament full of neoliberals would know how to simply do that.
And I’m not willing to take his word. The only thing I’m interested in hearing from Sumner is an honest account of the political gymnastics he underwent to exclude market reformers, globalists, investors and the entire fossil fuel industry from “rent-seeking special interest groups”.
Follow the money. “Health care” costs more; “education” costs more. But those things each have line items, and the line items have line items, and so on fractally. At each point there are either new line items totaling more than the prior cost, or a large increase in item cost that has an explanation deeper in the numbers.
And I don’t believe that it actually costs as much as is written off to treat someone using the ER to come off of a bad trip.
I’m skeptical of the claim that corporate profits aren’t that high.
Corporate profits are hard to measure. Anti-inductively hard. If they can be measured, they can be taxed. Are we sure there aren’t large expenses paying mysterious Cayman Islands LLCs?
The capitalist class is doing really well. You’ve talked about the Great Widening of the Gap Between the Rich and the Poor, which seems to have started in the mid 70s, around the same time these prices all mysteriously turned upward. Maybe the one is funding the other.
If profits are successfully hidden from the government, they have to be successfully hidden from the balance sheet, showing up neither in dividends paid nor in assets. Which, especially in a US company would produce a stockholder revolt, hostile takeover, and purge of management. The very investor demand for quarterly profits that is regularly bemoaned as causing US corporations to be short-term thinkers prevents corporations from hiding their profits.
(The obvious exception would be the minority of companies that do persistently run small profits without rebellion. But Amazons are pretty rare.)
It is quite easy. Client X is an international retailer. Its goods are manufactured in Asia and sold in the Middle East through its Swiss affiliate. The American company buys the goods from the Asian supplier. It then sells the same goods at cost to its Swiss affiliate and ships them directly to the distribution center in Dubai. The Swiss party then sells the goods at a huge markup to the Middle Eastern subsidiary, which sells the goods at cost to end consumers through its physical stores.
The Swiss subsidiary made all the money. The American company and Emirate subsidiary both operated at a huge loss. In the end the ecosystem of the corporation made a profit, but it will pay no taxes on that profit until the funds are repatriated, which they won’t be, as the money is used to fund expansion into the European market.
The publicly traded holding company that controls all three companies can now show a profit on its balance sheet without having any tax liability.
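To make the flow above concrete, here is a minimal numeric sketch of the scheme as described. All figures and overheads are invented for illustration, and real transfer-pricing rules constrain this far more than the toy arithmetic suggests:

```python
# Toy walkthrough of the profit-shifting flow described above.
# All numbers are hypothetical, per unit of goods sold.

supplier_price = 100   # US parent buys from the Asian supplier
overhead_us    = 10    # US parent's own operating costs
overhead_uae   = 15    # UAE subsidiary's retail operating costs
retail_price   = 180   # what Dubai consumers ultimately pay

# Step 1: US parent sells to the Swiss affiliate "at cost".
us_profit = supplier_price - supplier_price - overhead_us      # a loss of 10

# Step 2: Swiss affiliate marks the goods up to the eventual retail price.
swiss_profit = retail_price - supplier_price                   # profit of 80

# Step 3: UAE subsidiary sells "at cost" (its own purchase price).
uae_profit = retail_price - retail_price - overhead_uae        # a loss of 15

group_profit = us_profit + swiss_profit + uae_profit           # 55 overall
# The entire group profit is booked in Switzerland; the US and UAE
# entities report losses, and the Swiss cash stays abroad unrepatriated.
```

The point of the sketch is only the allocation: the group as a whole is profitable, but the profit surfaces in whichever jurisdiction the intermediate markup was placed.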
Switzerland has taxes too.
Edit. Also, I want to comment on “It is quite easy.” I am a tax accountant. There is a whole branch of corporate taxation that deals with transfer pricing. The IRS would be very unlikely to accept the scheme you outlined above, nor would the UAE tax authority. There are very complicated ways to achieve this by extremely smart people — hence the Apple profits of billions in an Irish sub that does essentially nothing. But “easy” isn’t the word to use.
…and the various tax authorities are always changing the code to account for anything that turns out to be too profitable. Hence the Double Irish going the way of the dodo.
No matter how smart you are, tax avoidance is always bargaining with Lord Vader.
Fair enough, “easy” is flippant, and I am certainly no expert. I work on the logistics side of these transactions, and I can tell you they are structured exactly as I have described. To what end aside from tax efficiency I cannot gather.
Aside from the issue mentioned already by SEE, I’ve seen some evidence that corporations can be equally vulnerable to cost disease as public institutions.
For example, since the 1980s CEO pay has quintupled despite no corresponding growth in profits or other performance measures to justify it. This probably has a far smaller effect on overall costs, but it still stands as a demonstration of how market failure can occur and produce large cost increases in these firms.
I would venture that many firms have seen huge increases in both revenues and costs so that when you adjust profit for inflation it hasn’t really changed at all, on average.
Bear in mind that a lot of this is competitive pressure. The employment market has changed a lot since the 1950s, and there’s no longer the expectation that the CEO of a major company is either a founder or “climbed the ladder” in the lifetime-employment model. Many CEOs (especially at large public companies, where most of the CEO pay inflation has come from) are recruited from outside, with top candidates commanding a bidding war for their services. In other words, the market dynamics for CEOs have changed from a buyer’s market to a seller’s market.
Cronyism between directors and CEOs is definitely a factor here as well — there are certainly plenty of examples of CEOs who absconded with gratuitous pay packages while their company burned to the ground around them — but don’t neglect the basic market fundamentals when looking at this issue. People were greedy and self-dealing fifty years ago as well; that hasn’t changed, so it behooves us to look at other factors first.
How much evidence is there that top outside CEOs actually do a better job than old-school “climbed-the-ladder” CEOs?
I believe that the main reason is that shareholders:
1. are incompetent at judging CEO quality (for which there are no good measures, so they can’t be blamed for this)
2. believe that the quality of the CEO is very important
3. believe that better-performing CEOs are attracted by above-average salaries (AFAIK research shows that this is false)
The result is a permanent bidding war where shareholders try to always give above average compensation, since they cannot judge the value of the CEOs and thus use their price as a proxy. CEOs are Veblen goods.
And since CEO pay is irrelevant to the bottom line, there is no price pressure. A company pulling in a billion a year in revenue won’t care about saving a million a year on CEO pay.
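The ratchet implied by points 1–3 can be sketched numerically: if every board, unable to judge quality directly, targets pay at a fixed premium over the current market average, the average itself compounds with no quality improvement anywhere. The 10% premium and the normalization below are invented numbers for illustration only:

```python
# Toy model of the CEO-pay bidding ratchet described above:
# boards use price as a proxy for quality, so each one offers
# a premium over last year's average, which raises the average.

def ratchet(avg_pay, premium=0.10, years=10):
    """Each year, boards set pay at (1 + premium) * the prior average."""
    for _ in range(years):
        avg_pay *= 1 + premium
    return avg_pay

start = 1.0                    # normalize the starting average to 1
after_decade = ratchet(start)  # roughly 2.6x after ten rounds of outbidding
```

Under this (admittedly crude) assumption, pay grows exponentially even though no individual board is behaving irrationally given its beliefs.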
Lots of studies on CEO pay. Most don’t seem to find that they are greatly overpaid.
@Cliff: what do you mean?
@Aapje: Completely agreed on all points.
from what I’ve heard most profits are due to industry gains rather than specific company actions
which would imply that the CEOs can’t be doing much, right?
CEOs seem to generally be governed by the norms of the CEO subculture, which makes them act very much alike. IMHO, those norms mostly suck. An interesting observation is that in my country, for some years, the CEOs who got the prize for best manager tended to end up in court and be reviled as destroyers of their company and/or societal wealth some years later. So at least at the time, there seems to have been a very destructive ideal of what a CEO should do.
I believe that there are actually extremely capable managers, but I consider it very likely that most of these never get the job as it is so hard to distinguish the unconventionally good from the unconventionally bad. Choosing predictably mediocre is quite rational behavior for shareholders.
I would argue that this is why founders tend to be such good managers. They simply used their unconventionally good skills to never have to depend on people taking the risk to parachute them into a big organization. They simply build it up themselves. When they go, you tend to see a way different managerial style by the successor.
Take Steve Jobs, for example. Apple did great when he was in charge. When he left, Apple was run by conventional managers who did the normal managerial stuff, but had no vision. Cut some jobs, lower the production costs, do some R&D in the exact stuff you were doing and let them make gradual improvements. This kept the company going, but it stagnated. Then Jobs came back and boom: big growth, leadership in new markets. He clearly had something extra. If Jobs goes and things start going bad and he comes back and things pick up, then the rational conclusion is that he was a crucial factor.
But again, it is far from easy to find these people. It’s even harder to trust them to do their thing. One of the best ideas to do so is skunkworks projects, that are initiated from big companies, but act like startups. They don’t have the rules, inertia and such to make it hard to innovate, but if they come up with something good, you can push it into the parent company and take advantage of their production capabilities, their marketing machine, their brand, etc, etc.
Of course, Steve Jobs developed the Macintosh as a skunkworks project, because he was unafraid to cannibalize his existing products to stay on top. As they say: be your own competition.
That is an easy thing to say, but it goes against the natural impulses of most people.
This leads me to wonder whether CEOs are particularly competent at judging CEO quality.
Do you have a source on that, and are you sure about the “or otherwise” part?
Payscale.com says the median pay for the job “CEO” is $170k. That’s less than surgeons make, and I’m pretty sure it hasn’t quintupled after inflation.
I’m guessing you get the silly statistics by interpreting “CEO” to refer only to CEOs of BIG companies. So what is it you’re talking about, the top 100? top 300? Whatever stat you’re looking at, those salaries have gone up but so has the size of the companies – those high-paid CEOs are ultimately responsible for a vast increase in staff and shareholder value compared to the 1980s.
(And if you’re only talking about the salaries of a few hundred exceptional people, it seems about as unlikely to explain broader economic trends as would looking at the salaries of sports stars and talk show hosts.)
> For example, since the 1980s CEO pay has quintupled despite the lack of any growth in profits or otherwise to justify this.
I’ve been told that the tax rules on corporations and CEO-pay-plus-company-provided-benefits changed greatly in the 1980s.
Thus resulting in much less of “company pays for CEO’s nice car and membership to expensive golf club”, and much more of “company pays CEO directly, and CEO pays for nice car and membership to expensive golf club”.
Is this true?
If so, I think it greatly affects the analysis of CEO pay increase.
Yes, the item you’re looking for is Public Law 99-514 section 142, part of the 1986 tax reform. It eliminated a number of tricks companies were using to game the tax system by claiming outsized prices for dining and entertainment expenses, sharply limited the types of expenses that could be claimed, and made general entertainment expenses only 80% deductible rather than 100%.
This largely ended the era of the company-provided car and country club membership for all but the most elite individuals at the wealthiest firms.
Greatly affects? How much of CEO pay was in cars and country clubs?
I have a feeling that these perks would have been more significant at the middle management level than C-level. It’s pretty hard to spend a significant proportion of C*O pay on a single car short of a Ferrari or Lambo (though a friend of mine has stories…), but an entry-level luxury car (think Buick in the ’80s, Lexus today) might otherwise have represented 20% or so of a middle manager’s take-home pay, adding up payments and maintenance. Add a couple client lunches a week at a fancy restaurant and you’re getting into some serious lifestyle improvements.
@Nornagest: Definitely the case. And yes, it makes for a noticeably different experience.
My last employer was one of the largest media companies in the world. As middle management in one of their engineering groups, I had a blank check for first-class business travel, meals and entertainment expenses, and so forth. When I had to spend a few months in Shanghai and Shenzhen, I was living out of 5 star international hotels the whole time and nobody batted an eye about the expense.
All of these things are obscenely wasteful, but man did it make it comfortable to travel. (Which was no doubt the point.) My current employer is just under 200 people, and travel here is considered a burden rather than a privilege: people fly coach, stay in economy hotels, have very limited per diems; it’s all focused on economy. But as a result, there’s a lot of resistance if the company wants to send you somewhere rather than immediate acquiescence.
I think once an organization gets big enough, paying for these kinds of luxuries becomes a way to reduce friction. Is the cost worth it? Probably not; it’s probably still mostly a case of people maximizing rents within their hierarchy.
But I do miss it. 🙂
When I look at all of the categories you’ve listed out — education, health care, infrastructure, housing — the notable common factor that leaps out at me is that all of them have massive distorting factors that prevent markets from working.
In primary and secondary education, health care and infrastructure, there’s a fundamental price insensitivity because the person consuming the service is not directly paying for it. This is extremely obvious in the case of primary education and infrastructure, paid for by diffuse taxpayers but consumed by individuals, but it’s also of course the case in most modern healthcare, which is paid for through comprehensive health insurance plans which decouple the costs of individual treatments from the fee being charged to the consumer. (For many people, this fee is being taken out of their paycheck in a completely invisible fashion, since it affected their salary negotiations in a way they were never aware of in the first place.)
In education and infrastructure, public employee unions provide a powerful impediment to effective government negotiation with the service providers, and as @cassander points out, wage data for public employees is only part of the picture; their benefits packages are luxurious compared to private sector employees, they work fewer hours, retire much earlier, and in many states, they can game the compensation system to drastically boost their pension relative to their career earnings (and the amount of money that was contributed to back that pension, hence one of the reasons for the current solvency crisis in public pension funds). In addition, their pensions are usually guaranteed by law, in contrast to private sector retirement plans which are dependent upon market performance and sufficient investment.
In infrastructure in particular, I would be very interested to see a productivity study of the typical US public infrastructure worker against the typical worker in, say, Korea. I’ve never worked construction, but anecdotally, friends who have worked it have had very little good to say about the work ethic of many of the people involved in public works projects, in large part because of extremely generous union protections and benefits and a lack of pressure to perform (because the contracting companies are rarely on fixed-cost bids, and benefit heavily from “unforeseen” delays that allow them to bill for more labor). In short, there appears to be very little demand from elected officials to get quality service for the taxpayer money, and in many cities (again, anecdotally) there seems to be a significant link between public works construction, organized crime and political kickbacks that is probably contributing to it.
In addition, we have significantly higher environmental assessment costs in the US, and in many states, private actors can sue to block public works projects, creating a nearly endless stream of delays, studies, re-studies, committee reports, hearings and so forth that add significant cost and time to any project. Some organizations even use this as a form of “greenmail” to extort bribes from developers to go away.
Moving on, higher education is price-insensitive because it is a status good more than anything else in the modern day. Many higher-end professional positions (Big Law, management consulting, executive leadership roles at large companies, etc) are completely closed to anyone without a sufficiently high-status degree from the Ivies or a tiny handful of other top-tier schools. Regular white-collar positions are almost all closed to anyone without a bachelor’s degree from some accredited school somewhere, and even there, status comes into play; a degree from your local state school is still likely to look a lot better to an employer than one from an online university or some random private college in the middle of nowhere that no one has heard of.
Combine this with a federal loan system which will provide an ever-increasing amount of money to allow students to pay whatever price the colleges demand, and you don’t need a fancy model; Econ 101 will predict uncapped price increases. No matter how terrible an investment a college degree is, the alternative is being frozen out of the white collar job market almost entirely; people will borrow whatever they have to. I doubt many people ever do the math to figure out what their lifetime earnings differential is; that’s not the point. The point is that if you’re raised in American middle or upper-middle class culture, you get a degree so you can be respectable and not one of those people, the uneducated louts. There’s no reason for the colleges not to raise money and spend it on whatever they want: they pay no price for doing so and will never have a shortage of applicants.
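The Econ 101 prediction in the paragraph above can be put in toy form: if demand is saturated at any price the loan system will cover, and colleges price to the loan ceiling, tuition tracks the ceiling regardless of the degree's underlying value. The starting price and growth rate below are invented for illustration:

```python
# Toy model of tuition chasing subsidized loan availability, as the
# comment argues. The adjustment rule (colleges price exactly to the
# loan cap) is a deliberate simplification, not a claim about real data.

def tuition_path(loan_cap, growth=0.05, years=10):
    """If loan limits grow 5%/year and students borrow whatever it takes,
    sticker price tracks the cap exactly, year by year."""
    prices = [loan_cap]
    for _ in range(years):
        loan_cap *= 1 + growth       # federal loan limits rise
        prices.append(loan_cap)      # colleges raise sticker price to match
    return prices

path = tuition_path(10_000)
# Tuition rises ~63% in a decade with no change in educational value.
```

The model is deliberately stupid; the point is that with price-insensitive, loan-backed demand, nothing in it pushes back on the price.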
The best proposal I’ve ever seen for returning college costs to baseline is a very simple one: make “college degree” a prohibited employment discrimination category. Issue tests, require professional certifications, fine; but the generic sheepskin would no longer be something the employer could ask about. I would be willing to bet the impact on the private economy would be nearly zero but higher education costs would collapse in a decade. (Initially, of course, businesses would be looking for every extra-legal way they could get around the prohibition, if for no other reason than that hiring managers are so conditioned to think of the college degree as the dividing line between civilization and the savages. But as more people started to slip through without degrees, and people saw there wasn’t a sudden talent apocalypse, I bet this would change a lot.)
Lastly, housing price increases are up dramatically in the nationwide average, but this isn’t the whole story; you have to look at it by region. What becomes quickly apparent is that cities which allow constant development have seen modest increases, while cities with tightly-zoned and highly regulated development markets have seen massive increases. Laws that stymie NIMBYism and promote unrestricted development could probably solve this problem in a few years, at the cost of wailing and gnashing of teeth from existing landowners who quite like the massively inflated value of their property. A land value tax would also solve this problem virtually overnight, though of course with the attendant sob stories about how grandma can’t afford to stay in her $3 million cottage she bought in 1937.
In conclusion, none of these problems are intractable and while the specifics of exactly where the money is going are definitely still mysterious, there are some very large and obvious mechanisms that allow for them to grow unregulated. Addressing those seems like a good first step before doing any more invasive intervention, at least to me.
Re: NIMBYism: the weird thing is that allowing more development should raise the value of land to landowners. The people who ought to oppose unrestricted development are renters, who get the supposed benefits of living in under-developed neighbourhoods without forgoing the benefits of unrestricted development. And yet anecdotally I’m not seeing a huge correlation between NIMBYism and the proportion of people who own their homes. I think of Nevada and Texas as allowing unfettered growth, but just eyeballing it their home ownership rate doesn’t look super high: http://eyeonhousing.org/2015/03/vacancy-and-homeownership-rates-by-state-2014/
Re: education: price-insensitivity doesn’t work as a complete explanation if professor salaries aren’t going up. I mean, if you have extra money as a college administrator, this seems like the obvious thing to spend it on: it will make your school more prestigious and will make your co-workers like you.
Developing your own property makes it more valuable. Your neighbor developing his property makes your property less valuable.
I disagree entirely. The crappiest house on a nice block is always going to be worth more than the nicest house on a crappy block. See Detroit vs Brooklyn.
It will make your school more prestigious in the eyes of its *employees*, and probably overall about twenty years down the line. Right now, though, the thing that will make your school better in the eyes of your *customers* is a ten story dorm with a labyrinth of security gates and closet-sized rooms.
I would expect the opposite pattern. Rules keeping a lot of land off the market–80-90% of it in the Bay Area is the figure I have seen–mean that land where building is allowed is more valuable. And since it means there is less housing available, rents are higher, not lower.
The losers are tenants, commuters who have to live farther out, and owners of land that is not allowed to be developed.
I like many of these prescriptions. The college-degree discrimination one especially.
It’s also politically viable from a “blame the rich white Ivy League Good Old Boys” perspective. As sad as it is, we need some sort of scapegoat to generate enough motivation and public will to demand something good. Good can’t just be good. It has to be good by punishing the people responsible for the bad.
One suggestion I’ve heard — and why I think that even if you could somehow mute the outraged academics, you’d still never get college degree discrimination banned — is that the current system is the result of anti-discrimination law, and specifically of disparate impact doctrine.
Employers are no longer able to issue employment or promotion tests unless they can prove to the satisfaction of a court that the test is a “business necessity”; otherwise, if protected minority groups test lower and consequently get fewer opportunities than white men, the business can be sued for discrimination. (And they will, just due to class/wealth distribution and its attendant environmental effects; you don’t need to even consider HBD to know the outcome.)
But colleges. Well, colleges have an exemption to this rule. They get to filter people on admission through a battery of selective tests, and the more selective the college, the more filtering has been done — filtering the employer can’t legally do themselves. More than anything else, this explains the status quo to me and why businesses are willing to pay a premium for college graduates even for jobs that have nothing to do with their degree: they’re buying discrimination-lawsuit insulation by using an intermediary to filter job applicants.
I suspect if disparate impact doctrine were eliminated by the courts and the burden of proof returned on plaintiffs to show conscious discrimination, the premium on a college degree would steeply decline.
That’s a really interesting viewpoint. Companies are outsourcing discrimination to a privileged institution that’s still allowed to discriminate.
Here’s a supporting account from the opposite side of the process https://nick-black.com/dankwiki/index.php/A_dispatch_from_Terminus
My God, that was devastating.
It would be quite amusing (in a sad way) if the places where the opposition to discrimination is strongest are secretly the places where we outsourced discrimination to.
Trust me, that’s not even 10% of it.
Part of the reason colleges consistently have strong anti-discrimination movements is that (dark dirty secret) affirmative action doesn’t exist for the black kids; it exists for the rich white kids who want to learn in a diverse environment. Its real effect is to ensure that the African American kids (because they’ve been admitted disproportionately to more prestigious institutions than other kids) end up in the bottom 10% of their class and suffer all the negative effects thereof.
It’s hilarious! The university genuinely is hurting racial minorities and the minorities know this so they protest, but what do they protest for? More of the hurtful policies!
The more you look at higher education the more you see it as the cruel joke it is.
That’s not a dark dirty secret; that’s because the court cases used to justify affirmative action at universities are built around the idea that diversity is good for the student body in general, not that it specifically benefits black people. So everyone pretends accordingly.
Employers are already banned, by SCOTUS ruling, from using college degrees as a filter, the same way that they use IQ tests. Griggs said that an employer couldn’t use them unless they could show that they were required for the job.
This hasn’t filtered down into practice, but it would just take a few good lawsuits to fix that. I outlined the grass-roots way to do this nearly two years ago: https://slatestarcodex.com/2015/06/06/against-tulip-subsidies/#comment-210334
I think in practice this stuff is super easy to get around though. What you’re banned from doing is requiring everyone take an IQ test and then explicitly using IQ as a filtering criteria.
I work for a fairly elite-level firm that does most of its recruiting on college campuses (at both the undergrad and graduate level). Candidates are encouraged to voluntarily report their SAT/GMAT/GRE scores (everyone with a decent score does). When reviewing resumes, recruiting will send us a “friendly reminder” that GMAT score is the most accurate predictor of long-term career success we have access to.
And what a coincidence! At the end of the day, most people who report high scores get interviews, and most people with lower scores don’t. And it’s all nice and legal.
Courts aren’t idiots. The fact that your company plays dangerously loose with the IQ test rules doesn’t mean they aren’t there.
There you go.
It really doesn’t matter if the law says you can’t use either tests or college degrees, if the law, as actually practiced, bans using the tests but not using the college degrees. The law will affect the market in the way it’s actually practiced.
(Also, I suspect that judges will be a lot more willing to agree that college degrees are a job qualification than passing an IQ test. “Everyone knows” that educated people do jobs better.)
If judges will want to go against the explicit text of Griggs, okay. But they would still be going against it.
Maybe I’m missing out on some important context and the courts have already changed their mind on degrees. But so far it seems like the case law is sitting right there, and it’s just that no one has bothered to exercise their rights to a job without a degree yet.
This thread just blows my mind. I never realized that US case law was this absurd.
The IQ test in Griggs v. Duke Power was instituted as of the effective date of the Civil Rights Act of 1964. The Supreme Court wasn’t going to stand for that fuckery.
If you are not using IQ tests in an attempt to be racist, you probably are going to be OK.
The many examples to the contrary seem to indicate otherwise.
The NYC Fire Department had some weird racial hiring issues
“The Fire Department is now 86% white, 9% Hispanic and 5% black, according to the department’s latest figures. By comparison, the NYPD is 52% white, 27% Hispanic and 16% black.”
>“The Fire Department is now 86% white, 9% Hispanic and 5% black, according to the department’s latest figures. By comparison, the NYPD is 52% white, 27% Hispanic and 16% black.”
Why would you assume that the FD should have the same racial makeup as the city, and that if it doesn’t, the only possible reason is racism?
Griggs was essentially thrown out by various SCOTUS cases in the late ’80’s. The Civil Rights Act of 1991 basically codified Griggs, so the disparate impact conclusions of Griggs came back. But I’m not sure if a judge could actually use conclusions drawn by Griggs as precedent in a current case. It might be a good thing if that were the case, to fight back against the irrational college premium, except that I hate the disparate impact standard, and I would hate to see it expanded.
Yes. I think of Griggs vs Duke as one of the three worst SCOTUS cases ever decided (or at least within my lifetime). And all three were in the ’70s. Our current SCOTUS could get a lot worse. One of the other two is Roe vs Wade (has anyone here read that case in full? It is logically incoherent all the way through). The last one I can’t remember the name of, but it was about religion in schools.
What about that one which said that growing crops for your own use violates government price controls under the commerce clause?
It doesn’t need to be a filter as long as it can be an attribute that can be considered. If I’m an employer, I don’t need to say “degree required” as long as a. I have more applicants than positions available, and b. Some significant portion of the applicants have degrees.
But I’m not entirely sure how we would go about getting rid of that, or if it would be a good idea. Even if a college does not offer any skills at all (education value of 0), the degree still semi-certifies certain qualities, like a. That person can delay gratification for years at a time while they work on a degree. b. They are likely coming from an educated family, and thus have better genetics/culture/work ethic.
These are valuable signals to employers, who genuinely need some way to separate out their applicants, in a way that no test can capture (to the best of my knowledge). Training an employee is expensive (more so indirectly than directly), and employers need to know what a candidate’s chances of success are.
I suppose unpaid/very low paid internships are an alternative, but there are plenty of problems with that approach as well.
Fire department having radically different racial profiles from police departments seems weird to me. Maybe nepotism is a factor?
On the other hand it’s way easier to get into college as a NAM and harder as an Asian
Thanks for saying a lot of sensible things. I also suspect that a lot of this comes down to the problem of A spending B’s money on C, with no one involved having any incentive to question the costs too finely.
Re: College degree as protected employment category.
It seems to me like there should be an opportunity here. If college degrees are no longer a good signal of competence relative to their cost, why isn’t somebody gaining competitive ground by just ceasing to require them? IIRC, labor is the largest component of budget for nearly all industries. Hire people who on-average are similarly competent, but will take a lower pay rate because they don’t have six figures of loans to pay off.
Answering my own question, the IT industry seems to have already done this in some ways. I don’t have a degree and many others don’t either. OTOH, my partner works for the healthcare industry. Despite being hypercompetent, she’s having trouble advancing because of mandatory degree requirements. That seems like it should hurt her employer as much as her.
The savings are all indirect. If the employer hires someone without the degree, the savings go to the employee, not to the company. The company would have to underpay in order to recognize the savings.
In talent-starved fields, like programming, employers are perfectly willing to find anyone capable of doing the job.
But they can underpay, because under current circumstances the person without the degree has a weaker bargaining position.
Agreed that talent starvation can have the same effect — but it still feels like somebody is leaving money on the table here.
I don’t think that most businesses have any problem finding literature or art history majors willing to accept the same pay. There are enough underemployed people with BA degrees that there is no real incentive to take a risk on a lower-information candidate with comparable experience.
>It seems to me like there should be an opportunity here. If college degrees are no longer a good signal of competence relative to their cost, why isn’t somebody gaining competitive ground by just ceasing to require them?
A college degree might have some value and still not be worth its cost. e.g. if the degree costs 100k, but provides 10k in value to an employer, it’s worth it for every employer to hire only grads but society as a whole comes out 90k behind.
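To put toy numbers on that point (the figures below are hypothetical, just mirroring the example in the comment):

```python
# Toy model of the signaling trap: a degree that costs the student far more
# than the value it adds to any employer is still individually rational for
# every employer to require, yet socially wasteful.
degree_cost = 100_000    # paid by the student (hypothetical)
employer_value = 10_000  # extra productivity an employer captures (hypothetical)

employer_gain_per_grad = employer_value           # why each employer prefers grads
social_loss_per_grad = degree_cost - employer_value

print(employer_gain_per_grad)  # 10000
print(social_loss_per_grad)    # 90000
```

Each employer acting rationally still produces a collectively wasteful equilibrium, which is the standard signaling-arms-race story.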
One major difference is that IT is an industry where candidates can be easily and cheaply assessed with a much higher level of predictive quality than using a degree as a proxy. I can sit down with an interview candidate for an hour, walk them through a series of programming exercises, and learn far more about their suitability to the job than I can learn by perusing credentials on a resume. This does not, of course, tell me anything about their discipline, temperament or suitability to the soft skills of the job. But assessing the hard skills of a programmer or sysadmin is far, far easier than, say, assessing the competence of an attorney, project manager or public relations specialist, all professions that rely far more on soft skills for success.
If the tech industry was barred from giving screening tests in interviews, I suspect we’d fall back on “college degree” as a proxy just as hard as most other industries.
Yes, I wanted to write a comment like this.
Scott has cost graphs for a bunch of fields that are heavily distorted by government spending, regulation, etc.
What I’d like to see is cost graphs for fields which are the least influenced by the government. Whatever those might be. Perhaps restaurants? Apparel manufacture? Nail salons? Gyms? Scott does mention veterinary care as increasing a lot, which surprised me. That would bear further investigation.
I’ve always heard that veterinary care and plastic surgery (both significantly less regulated) have been decreasing in costs relative to standard health care. Which is probably worth confirming, if true.
Re: market distortions in veterinary medicine, I remember back when I was making the rounds looking at colleges around 2000 I interviewed with a professor who volunteered, apropos of not much, that vet schools were heavily cartel-ized. That is, they actively restrict the supply of new vets to engineer artificial scarcity. I don’t know that this would explain the slope of cost increases over time, but there you have it.
I’m having trouble finding any literature on this, but a quote from an article about a proposed vet school expansion in Texas:
“A lot of it has to deal with existing vet schools trying to keep people out,” Williams said. “They are similar to an oligopoly power. They are a cartel.”
There is also an increase in demand at play here. If there is an expensive procedure to be done on a family pet, many people will choose to euthanize the pet instead (regrettably, probably, but most will spend far less on their dog’s medical care than on their grandparents’). As people become more wealthy, many of those procedures are suddenly going from completely unaffordable to affordable for the well-ish off who are sufficiently attached to their pet.
Maybe veterinarians are like “how come those other surgeons are charging a lot for a basic vasectomy but when I do it on a dog suddenly people don’t want to pay more than 4 figures? I am a real doctor too!” and increase their prices.
Combine this with the trend of hipster millennials who have dogs instead of kids and for whom “no price is too high!” when fido needs his operation.
There was once a time when it would have been socially acceptable to simply let your dog die (or have them euthanized) rather than pay $5,000 for an operation. I’m guessing in many circles this is no longer the case.
Hell, I’ve got an aging in-law who’ll laugh at hipster millennials as much as the next old fart, but she’s paying $600 for her cat to get an echocardiogram, and depending on the test results will be entirely willing to pay for medication afterwards as long as they can supply it in some form other than pills, because she does not want the hassle of making this cat take pills.
One of my parents’ cats is on fluoxetine (it was prescribed to prevent out-of-place urination, and appears to have done so, although the regression fallacy may be in play – I’m guessing placebo isn’t going on, since after all, cat) and it is, I am fairly certain, more expensive than generic fluoxetine would be if you got it prescribed to a human. It certainly is once the size of the dosage is taken into account.
Well, just for the sake of thoroughness… pissing on things is one of those classic “pay attention to me” signs pets do when they’re feeling neglected or stressed, so it’s conceivable (albeit, admittedly, not likely) that the act of getting King Grumplewumpkins to take his medication provides enough attention – or gives him enough of an outlet to express/actualize his pent-up stresses – to solve the problem even if said medication was just chalk pills.
You happened to remind me of a study I keep hoping will be debunked because it makes no freaking sense to me:
Rats Get Placebo Effect
Oh, rats can totally get the placebo effect, especially if the researchers are deliberately trying to induce it as they seem to have been doing in this study. Relevant article here – basically, if you spend some time teaching a rat to associate an unusual stimulus with a certain medical effect, like every time it gets to drink sugar water it gets a certain drug, it will then experience the placebo effect when given the stimulus without the drug. This can be done with humans too, e.g. giving a patient a weirdly flavored drink with their medication and then using the drink alone to stimulate a response.
@Synonym Seven/Doctor Mist:
Well, said cat gets plenty of attention, and I can’t think of any stressors that disappeared during that time. On the other hand, maybe there’s some kind of Clever Hans thing going on, and the pet notices that its owners are behaving differently?
Re: your comment on infrastructure costs:
Scott’s OP notes that it costs several times more to build a subway in the US than it does in the UK or France. Both of which are countries that have been developed for a very long time (unlike Korea) and which have significant labor and environmental protections. I also have a hard time believing that US construction workers are significantly lazier than their west European counterparts.
You can find some interesting work on this problem at this blog. I’m not enough of an expert to say exactly why this is – I’ve read some good analyses that blame the US government-contracting process, or weird regulations (mostly around safety) that we have and other countries don’t – but it’s clearly not just environmental or labor costs, and there are probably multiple factors behind it.
New cars are often status goods. Do we see a similar dynamic with their prices?
You make a lot of points like:
But if this is true, why did prices start low and grow over several decades? Why not just start big? And why does this only happen in the USA lol?
It’s fun to say that markets do not work, people are stupid, etc etc. But why is that not true across time and across countries?
Well, in most countries, the answer for healthcare is “because the government imposes price controls and the US is subsidizing all the medical research for the entire world.”
Education is a harder one to answer. Let me throw out a few theories that are probably affecting it.
First, I suspect it has at least partly to do with our two-party system. The NEA is a large and extremely powerful lobby — one of the most powerful in the country — and in a two-party system, sizable voting blocs have the ability to play “kingmaker” between two nearly-evenly-matched parties and extort larger concessions from the side they join than they might otherwise be able to achieve in a system with more competitive options. The NEA’s support is crucial to the Democratic Party; without it, we would have a one-party system. (See the NRA for the reverse example on the Republican side.) So they can demand better facilities, better benefit packages, smaller class sizes, and so forth and get them. This produces less political resistance to increasing spending than you might otherwise have.
I think the answer for “why slow growth, why not just go big to start with?” is much easier: it’s the boiling-a-frog parable. If you shock the system with large, sudden increases in spending, voters will rebel at the polls and you’re likely to lose them. If you very slowly increase spending over many years, you can extract a lot more rents from the system without anyone having a clear picture of “who’s responsible”.
Now, that said, political lobbies and one-sided teachers’ unions aren’t a uniquely US problem. (The UK has a very similar situation.) So why the US in particular?
Probably the most interesting thing I note is that we’re the only rich large nation. We talk about countries like the UK and Japan as if they’re our equals, but the reality is that they have much smaller populations and institutions and actually significantly lower income. Here’s a scatter plot I threw together of the nations of the world by GNI (PPP) and population, excluding China and India because their populations dwarf the rest of the graph and they aren’t really relevant to this discussion since their per capita GNI is so low. Note what an extreme outlier the US is. (For anyone curious, the unlabeled outliers on the far right of the GNI axis are Kuwait and Qatar.)
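A plot like the one described could be sketched as follows – the figures here are rough, illustrative values, not the commenter’s actual dataset:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# (GNI per capita PPP in USD, population in millions) -- approximate figures
countries = {
    "US": (60_000, 325),
    "Japan": (44_000, 127),
    "Germany": (52_000, 83),
    "UK": (43_000, 66),
    "France": (43_000, 67),
    "Qatar": (120_000, 2.6),
    "Kuwait": (80_000, 4.1),
}

fig, ax = plt.subplots()
for name, (gni, pop) in countries.items():
    ax.scatter(gni, pop)
    ax.annotate(name, (gni, pop))
ax.set_xlabel("GNI per capita (PPP, USD)")
ax.set_ylabel("Population (millions)")
fig.savefig("gni_vs_population.png")
```

Even with ballpark numbers, the US is the only point that sits far out on both axes at once – very rich per capita *and* very large.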
If we hypothesize that:
A.) Organizations which grow larger become less efficient due to exponentially increasing coordination costs between all of their components and the inability of supervising entities to closely monitor all components for inefficiencies.
B.) Organizations will find ways to consume whatever resources are available to consume, as each individual in the organization attempts to maximize their own personal benefits from membership.
…then it suggests that the US both can and will garner “inefficiencies of scale” unprecedented for any other nation simply because we operate on a larger scale and have the resources available.
If you look at our spending on education in absolute terms, it’s staggeringly inefficient compared to other countries. But look at it in GDP adjusted terms and it’s actually middle of the pack for OECD countries. This suggests to me that our level of waste and inefficiency is at least in part a side effect of being able to tolerate it without the government going bankrupt.
Beyond that, I don’t really have any guesses as to what’s special about us. My original post was mostly pointing out that there are some obvious market deficiencies that can be corrected based solely on economic principles; reducing the known and obvious factors that impede markets from doing their efficiency-optimizing job might help flush out the less-obvious inefficiencies in our system.
Yeah, the commonalities in the sectors where “cost disease” exists are indirect payers and the level of regulation (including licensing), usually both.
Compare Lasik costs over time (which typically isn’t 3rd party paid) to health care. Compare housing cost changes in rural areas, or in metro Phoenix, Houston and Dallas to those in SF & NY. The numbers aren’t in because he hasn’t started yet, but if Elon Musk builds a tunnel below his own property, does anyone doubt it’ll cost way less than if the city did it?
What are the differences between those which magically make them not have the same massive cost increases? The answers are pretty obvious.
I will concede that regulations add a little to cost disease, but I am skeptical regarding your implication that it is the most significant factor.
Consider: As far as I can tell, food safety is regulated almost as heavily as medicine, and considerably more heavily than higher education. Nevertheless, the cost of food has not skyrocketed the way the other industries have.
The market for laser eye surgery actually fits my model quite well. Getting eye correction is nice, but as a society we do not feel it is essential. Lots of people wear glasses and are content. (I have a brother who could have the procedure done, but has declined because he “likes the way glasses frame [his] face.”) As a society we feel that going with the cheaper option (glasses) is valid, so we have not seen cost disease in this field.
If that were the case, one could not open a restaurant without a postgraduate degree in nutrition and food safety (equivalent to a compounding pharmacist), and Hormel would have had to spend a few hundred million dollars on human trials for safety and nutrition when they invented Spam.
I didn’t actually say regulation is the most significant factor, so to point out what you may have missed, I also wrote “indirect payers”. The point of my comment was that cost disease seems to be worst where there is a high level of both indirect payment and regulation. Increased costs are present when you just have one factor, but the places where you have both are obvious (to me) as even larger anomalies.
Health Care in general: Both
Lasik: Regulation, but not indirect payer
Subway Costs: Both
Housing: In some locations (where cost is worst), heavy regulation and some indirect payer. In others, less regulation and cost is better. Either way, cost hasn’t increased as badly as the other sectors with both present.
In terms of the comment on food safety regulations, a class someone can take and pass in one day isn’t quite equivalent… Perhaps you aren’t aware of how heavily medicine is regulated and licensed?
As a lawyer, I perceive your comments about med mal lawsuits as pure insurance company propaganda. Have we seen any change in defensive medicine in the tort reform states? The psych consult for depression seems better explained by medical systems set up to maximize revenue.
There is a book about this: David Engel, The Myth of the Litigious Society. Another factor could be insurance-company-induced-yet-unfounded fear of lawsuits. It seems unlikely that doctors carefully calibrate their referrals against a legal analysis of the likelihood of being sued.
I am a physician.
It is not just the malpractice concerns. I have never been sued.
Physicians routinely face other situations that produce fear. The kind of fear that keeps us awake at night.
In my case, I have had to face being reported to the state licensing board. It wasn’t because I had done substandard care. I reported a neglected child to protective services. An infant who had lost 10% of her body weight due to neglect by the dad’s family.
The parents were going through a divorce. The dad’s lawyer filed a lengthy complaint against me with the state board. Mostly, it contended that I shouldn’t have reported the case to protective services. The lawyer had to know that I am mandated by state law to do such reporting. It was clearly retaliation for my action.
Of course, the case against me was dismissed. But it took three years and thousands of dollars to clear my name.
I know many physicians who have the same sorts of nuisance complaints. It takes an emotional toll whenever things like this arise. I felt bullied, and this case is one of many reasons that I am leaving primary care. I can live a better life without worrying every minute about who will pop out and attack me. I dedicated my career to helping low-income patients, and I even spent years helping with the local charity clinic, but the positives I have gotten are minimal compared to the emotional stress involved on a daily basis.
Good bye medicine forever.
It might not have been retaliation (though you’d know more than I) so much as wanting to be able to dispute the child endangerment charge in the divorce proceedings–the lawyer can point out that the doctor was later investigated for making that allegation, so it shouldn’t be considered in establishing custody, or something.
The lawyer can cite an investigation they brought into being as reason for dismissing the charge? That seems…improper.
I don’t know what they *can* do, but it seems like if the investigation was ongoing they could imply it had merit, or at least wasn’t thrown out.
An investigation might also help demonstrate the seriousness of the complaint. If the doctor made an unreasonable allegation, why wouldn’t the person complain?
This is interesting because the law mandates that I must report any suspected child abuse. It says that I am not to investigate any charges but if there is any suspicion, I must report or risk losing my license. My lawyer has had to defend physicians who did not report and then faced loss of their license.
The law also says that I am protected legally when I report in good faith.
Obviously, lawyers feel that this part of the law does not pertain to them.
In my case, the reporting was clearly vindictive. It was not a simple report. It had items in it that I think could have represented slander. I considered reporting the lawyer to the bar but I did not want to create an environment of escalating accusations. I did this to protect my peace of mind.
The important point I am making that seems entirely missed is how this impacts malpractice concerns.
I think every doctor probably knows someone in the profession who has had to deal with being reported to licensing boards over cases that have little or no merit. Like this case. These cases always drag on for years before being resolved. While they drag on, I have to report it when I do my annual license renewal and explain what is happening in the case, just the same as I would if I had a malpractice case filed against me. Just being reported creates a blot on my record that does not go away until the case is resolved and closed.
These cases get lumped emotionally into the same basket as being sued. The impact is that physicians look at everything they do as having the potential to backfire and they worry all the time about nuisance law suits.
So when studies only look at the number of lawsuits and defensive practice of medicine, they are missing the boat. They are not looking at all the ways that physicians face legal actions and they are not looking at how many of them are not justified in the end but are just the result of someone with an ax to grind.
Lawyers typically underestimate the emotional toll this places on practitioners. The person who filed a note to the board wasn’t even one of my patients but was the lawyer of the patient’s grandparents. How in the world can I protect myself against something like that? The years it took to resolve were very emotionally challenging. I feel that the lawyer was a bully who was trying to get back at me for my reporting and I have not seen any evidence that contradicts this. I had to supply medical records and several sets of testimony about my practice. I worried the entire time. Even though I felt I had done nothing wrong and my lawyer told me there was no problem with my records, I still worried that I could be found at fault for some obscure reason.
This sort of situation wears on physicians in practice and it is just one of many factors that lead me to retire early. I know many other physicians who are reacting similarly by early retirement, by restricting their practice and, sadly, by committing suicide. The medical profession has a high suicide rate compared to many other groups in the US.
Just idle curiosity, but what field are you going into instead? I’m kind of curious what other careers make for a smooth transition from medicine.
I have diversified.
I have a small farm to reduce food costs.
I have several small businesses that make money through the Internet and I am just starting to do consulting for other physicians who are looking for a way out.
I never had an enormous income since I was in primary care and I currently live in a house with a paid off mortgage. These things help a lot.
> Have we seen any change in defensive medicine in the tort reform states?
Tort reform typically caps damages. This is irrelevant for the problem Scott discussed. Being sued for $50,000 or $500,000 is the same nightmare to the doctor.
A damage cap of $50k would prevent most claims from being brought in the first place
Damage caps reduce the number of suits. Being sued for $50,000 is not a nightmare to a doctor because it doesn’t happen–a plaintiff would not recoup their costs. Tort reforms also include more direct ways to reduce the likelihood of suits. In Texas for instance, insurance companies changed the definition of negligence, adopting “a ‘willful and wanton’ negligence standard — interpreted as intentionally harming the patient — for emergency care”.
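A back-of-the-envelope sketch of why caps kill suits this way – all figures below are hypothetical, since contingency fees, win rates, and litigation costs vary a lot:

```python
# Plaintiff firms work on contingency, so a med-mal suit only gets filed if
# the expected fee exceeds the cost of litigating it. Med-mal is expensive
# to litigate: expert witnesses alone can run tens of thousands of dollars.
def suit_is_viable(damages_cap, win_prob, contingency_fee, litigation_cost):
    expected_fee = damages_cap * win_prob * contingency_fee
    return expected_fee > litigation_cost

print(suit_is_viable(50_000, 0.5, 0.33, 75_000))   # False -- not worth bringing
print(suit_is_viable(500_000, 0.5, 0.33, 75_000))  # True  -- viable uncapped
```

Under these (made-up) parameters a $50k cap makes the expected fee about $8k against $75k of costs, so the suit never gets filed in the first place.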
I volunteer in EMS. I once committed a reportable medical error (medication error, no resultant harm). This involved me having my ability to practice suspended for several weeks, followed by an hour-long meeting with my partner, service director, medical director, EMS coordinator, and at least one other person. Followed by a “don’t do that again”. It’s one of the more dreadful things I’ve dealt with in the past few years. And this was for something that wasn’t life-altering. My livelihood wouldn’t be impacted in any way.
Most of medical regulation is covered by civil (rather than criminal) statute. Ignoring claims for damages, there’s also the regulatory process. Having your license stripped isn’t something subject to criminal due process. It’s a professional board. Which means that the standards are lower. The stakes are also higher. If you have your license revoked you’re basically a fresh college graduate without any expertise trying to find a job.
I haven’t experienced either, but I’d prefer being a fresh college graduate without experience to a convicted criminal when job hunting.
Is the worst-case scenario there that you lose your EMS license, or can it affect your non-EMS related life? (I guess in principle you can be sued for malpractice, but how often does that actually happen, and is there some sort of insurance to protect volunteers?)
A few things. This wasn’t a criminal issue (I suppose I could have been tried for practicing medicine without a license, but that wouldn’t have gone anywhere).
From memory from EMS class (yes, this is covered early in EMT class as a part of the standard curriculum), the basic requirements for a tort:
1) Duty to act.
2) Breach of duty.
3) Harm.
4) Proximate cause between 2 & 3.
There was no harm. So elements 3 & 4 wouldn’t be met. Thus no suit in this case.
What about a bigger problem? Say I’d caused injury or death. Well, unless they can turn it into a criminal case by claiming that I intended to cause injury, a civil suit might arise. In that case, there’d be several people to point fingers at:
1) The ambulance service.
2) My medical director.
3) Me.
In order, the ambulance service is the usual target of EMS lawsuits. This is because they usually have large insurance policies, and much deeper pockets.
My medical director is the doctor who approves me providing delegated medical care under protocols. My general experience is that most of what doctors do these days is to act as a liability and paperwork sponge so that other people can get real work done. He has a big insurance policy. It would probably be hard to get actual money out of my medical director, but a lawsuit could probably make enough paperwork hassle that it would be worth settling.
Finally, me. As a volunteer, I’m protected by my State’s Good Samaritan laws, which tend to up the requirements to “gross negligence” to file suit. That’s a hard standard to achieve with someone who’s trying to do the right thing. Next, I was working as an agent of the ambulance service, so liability can be deflected that way. Finally, I carry an umbrella liability insurance policy to protect myself.
My full-time career is unrelated to healthcare, so even assuming my cert got yanked, it wouldn’t be detrimental.
It’s worth pointing out that tuition sticker price is a terrible measure of college costs. At public colleges, subsidies make tuition lower than the true cost, and at elite private colleges, the sticker price is often set very high and then discounted on an individual basis with need-based aid.
I suspect it’s more or less inevitable that the price of housing and services would increase in response to plummeting real costs of manufacturing goods. What else are people going to spend their money on? Something’s going to soak up the extra money; why not those things?
(Edit: I’m not saying these things couldn’t be delivered more efficiently, just that even if they were, we’d just buy more of them. Grad school could become the new new high school).
I’m also skeptical that standards of living are stagnating. Americans are living longer, in bigger and higher-quality housing, and spending more time in education and retirement and less in employment. And cheap manufactured goods. I’m aware of the real income statistics (although note that this is in part due to changing household composition, with fewer two-income households and more single-woman households), but given that on balance other statistics show a rising standard of living, it seems more likely to me that CPI calculations are not sufficiently capturing hedonic improvements.
Finally, you should never say, without qualification, that income inequality is increasing. Global income inequality has been declining for decades. More importantly, the global poor and middle class have seen great increases in absolute standard of living. This is a huge deal and horribly underreported. Yes, it’s unfortunate that standards of living for people in the 80th to 98th percentiles of the global income distribution have not risen as quickly as we might have liked, but overall, the trends of the past thirty years have been very good for most of the world.
Edit: Forgot to add that living paycheck-to-paycheck is something that people do in a wide variety of economic circumstances and probably has more to do with psychology than with economics. A lot of people will live the most expensive lifestyles their incomes allow and thus fail to save money regardless of their economic circumstances.
Jadagul wrote on tumblr about the college cost graph [1,2]. He shows graphs for net price as well as sticker price, e.g. the net price for private colleges only increased by 18% over twenty years, even though the sticker price increased by 94%.
If the net price only increased 18%, doesn’t that imply that the extra 76% is being shouldered by taxpayers? And doesn’t that argue for taxpayers not wanting to shoulder the burden for people who don’t already receive subsidies and so forth? Or just not wanting to shoulder an additional burden generally?
Hm, I guess that curve includes federal grants and tax credits, so part of it could be shouldered by taxpayers. But also a big difference between the sticker and net price is the institutional tuition discounts, which is just adjusting the price. It would be useful if someone could figure out how big the different components are.
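One way to see the decomposition, using the 94%/18% figures quoted above and assuming – purely for illustration – that sticker and net started out equal at an index of 100:

```python
# Sticker price grew 94% while net price grew 18%; under the (unrealistic)
# assumption that they started equal, the gap is what grants, tax credits,
# and institutional discounts now absorb, in start-year price units.
start = 100.0
sticker_now = start * 1.94
net_now = start * 1.18
discount_now = sticker_now - net_now

print(round(discount_now, 2))  # 76.0
```

Since sticker and net don’t actually start equal, treat that 76 as an upper bound on the discount wedge; figuring out how much of it is taxpayer money versus pure price discrimination is exactly the open question here.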
I guess Scott and John Maynard Keynes gave a suggestion for something else: leisure.
Indeed, maybe you could tell a story here by comparing the U.S. and Europe? Since 1950, the number of working hours in Europe apparently declined between two and four times faster than in the U.S. Meanwhile, the growth of cost of education and health care was much faster in the U.S. than in Europe. It seems like European workers chose to spend their increased wealth by taking more vacation, while U.S. workers used it to bid up education and health care costs? ☺
Probably true, but looking at the other end, the debt students leave college with, doesn’t paint a very different picture, does it?
Sounds plausible. I'm reminded of the way no one in Europe worries much about robots taking their jobs, because everyone takes that to mean living a leisured existence supported by the state.
And then discounted on an individual basis with need-based aid.
This is classic price discrimination. They charge exactly as much as they can to each party. Salespeople in other fields would kill for this kind of pricing power.
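As a toy illustration of that pricing power (every number here is invented for the sketch, not taken from the thread): each family is charged the sticker price, discounted down to roughly what they can pay.

```python
# Toy model of need-based price discrimination (all figures hypothetical).

STICKER = 50_000  # assumed sticker price

def net_price(ability_to_pay: float) -> float:
    """Charge the sticker price, discounted to what the family can bear."""
    return min(STICKER, ability_to_pay)

# Four hypothetical families with different abilities to pay:
families = [20_000, 35_000, 50_000, 80_000]
avg_net = sum(net_price(a) for a in families) / len(families)
# The school captures nearly every family's full willingness to pay,
# while the average net price stays well below the sticker.
```

The point of the sketch is just that a high sticker price plus individualized discounts extracts more revenue than any single uniform price could.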
What’s the correlation coefficient between the cost increases here, and the ratio of (#words in the federal tax code)/(real GDP)? Seems decently likely that both have the same underlying causes.
What happens if you randomly choose 3 service goods (e.g. plumbing, butlers, restaurants) and see how their prices have changed? If you look at the interval from 1900 to 1930, can you find 3 industries whose prices increased similarly to how healthcare, education and subway construction changed in the last 30 years?
From my experience, it seems very likely that most people (even lawyers) have close to zero ability to judge legal risk, and literally can’t tell the difference between probability = 40% and probability = 0.0004%. Texas has made it extraordinarily difficult to win med mal cases, but how many doctors in Texas know specifically what the legal standards are and how the economics work (https://www.youtube.com/watch?v=4lhuN-gpwqo)? Let alone other groups like nurses, hospital administrators, contractors, etc., who can all increase costs through fear of nearly nonexistent risks.
The extra money must be going somewhere
If prices have been increasing so dramatically while wages and profits have not, then there must be another section of the American economy that has had a huge increase in profits without an increase in productivity
building contractors? milo yiannopoulosi (if that is the right plural) ? somewhere else?
Well, there’s finance, and to a lesser extent software. But “stagnant wages” doesn’t mean stagnant labor costs. If you hire more workers, your labor costs go up, even if you keep paying the same salary. That’s definitely part of what’s happening with education.
My sketchy memory of Classical Greek wants to pluralise it as ‘Yiannopouloi’, but I have no idea how that differs from Modern Greek.
i like yiannopouloi too
not just in the sense of pronunciation either 🙂
Read the story Positive Feedback by Christopher Anvil. When I read it originally, I was rolling my eyes because it was such a transparent allegory for the recent history of rising health care costs. Then I realized it was published in 1965!
(I hope that the copy of the story I linked to is authorized — what are the terms of Baen’s Free Library? — but I’m not sure. You can buy the collection “Prescription for Chaos” from Amazon or Baen.)
Sincere question: What social problem? Along the theme of going back to how things were 50 years ago: 50 years ago, people were more than capable of having self-sufficient adult lives straight out of high school.
It’s true, the world has changed, and perhaps we need more and more-specialized educations to be productive now. On the flip side, there’s truth in the idea that our grandparents fought the nazis at age 16, while our 25 year olds today need safe spaces from them. People coming out of high school are fundamentally unequipped for life, and it didn’t used to be like this.
Yeah. But can we admit that's at least partially down to increases in the complexity of life, rather than decreases in the quality of people? I know you admitted “the world has changed”, but I don’t think you’re giving enough credit to the hypothesis that carving out a space in the economy is just a lot more difficult for a person of average intelligence now than it was then.
My father who left high school in New Zealand in the late sixties speaks of decently-paid unskilled work being so abundant that on multiple occasions he outright quit a job in order to take off with his band for a couple weeks, totally confident he could get another one instantly when he came back. That was unremarkable to him at the time but it sounds insane to modern ears.
What is the complexity of life that a college degree would help anyone with? (For that matter, it strikes me that “getting a college degree” is one of the most complicated things you can do already.)
In my experience, it has really helped in being able to deal with bureaucracy. Or at least not getting as frustrated when dealing with it.
Of course, this is not what university is meant to do, but having some experience of it from an institution with a few thousand or tens of thousands of people under its umbrella is good if you have to deal with a larger institution than that, like some level of government or other.
“It would be nice if everyone had the option of going to college, without needing to invest more than the time needed to complete their degree.”
Not every social problem is Nazis-levels of serious. That doesn’t mean we can’t consider them and discuss possible solutions. Also, as I understand it, the current fashion among 24-year-olds is in fact Nazi-punching, so there may yet be hope for the youth of the nation, or something.
More accurately, the fashion is to call people they disagree with Nazis, then punch them. Somewhat less noble.
well I’ve seen no evidence of that. But the most prominent punchees this year are the people who were loudly shouting “heil trump, heil victory”. Surely that can function as an admission of guilt?
Here, have some evidence.
Specifically, the second video, which apparently was taken in Berkeley right after a group of Black Bloc was interrupted whilst attacking one fleeing man in a Trump hat. The word “Nazi” is deployed about 20 seconds in. Was either Milo or any of the attendees who got attacked in the streets shouting “heil” to anything?
Let’s be clear. Eyeballfrog’s implication was that “the fashion is to call people they disagree with Nazis [more or less only on the basis that they disagree]”. Your example might not have involved heils, might not have been a “most prominent punchee” either, but in any case is one where people have reasons beyond simple disagreement for calling them Nazis. Specifically, I would hazard, Yiannopoulos’s stated views on Muslim people. Whether they are good reasons though is a can of worms.
Joe calls Bob a Nazi and punches him.
Scenario 1: Joe is well aware Bob is not a Nazi, but Bob disagrees with Joe on taxes or something, so Joe has cynically decided to call him one so as to win acclaim for punching him.
Scenario 2: Joe is vaguely aware that Bob might not actually support genocide or the abolition of democracy, but Bob supports conservative political policies Joe thinks are bad, and Nazis are bad, so he figures it’s a good idea to call Bob a Nazi and punch him in the course of demonstrating against conservative policies.
Scenario 3: Joe genuinely believes Bob supports genocide and/or the abolition of democracy (though this belief may be based on exaggeration and outright lies) and therefore calling Bob a Nazi and punching him is practically a moral duty.
Let’s be clear. Are you in fact asserting that eyeballfrog was only referring to Scenario 1, whereas the Berkeley riots were a mixture of 2 and 3 and therefore totally different and undeserving of condemnation? Because it seems highly implausible to me that the average person who’d condemn “calling people you disagree with Nazis, and then punching them” would draw a significant moral distinction between those three scenarios.
I made the assertion about eyeballfrog’s implication on the basis that it was meant to correct what Gazeboist said, which is that the fashion is Nazi-punching. I think eyeballfrog meant, not Nazis, rather people they disagree with are punched. So I think eyeballfrog was referring to something like your scenario 1 and 2. I would agree that these are, not morally indistinguishable, but both wrong, and would even throw in something like scenario 3 if you add in the assumption that the belief in question is unreasonable. (In general, people are less/not culpable for a wrong act if they reasonably believed it wasn’t wrong, and reasonable beliefs are sometimes based on exaggeration and misinformation). That said, you have left out the most important scenario of all:
Scenario 4: Joe believes Bob is a Nazi but has a different definition from yours.
Isn’t that what this conversation is really about? There is a meme going around that Nazism is not all about the genocides and immediate abolition of democracy. In fact, you may have seen it not two weeks ago on this very site (part VII). The fashion isn’t to punch someone because they are a Hitler’s-birthday-celebrating Nazi, but because you believe the movements they materially support lead to break down in the same ways as Nazism did (unless you punch them). I said before, actually evaluating the correctness of this belief and whether that makes punching them right is a can of worms. But I think there is more to the whole Nazi-punching thing than is captured in your scenarios or eyeballfrog’s comment, and that how much of your scenarios vs my scenario 4 is going on is unclear.
I will note that an awful lot of arguments surrounding the Social Justice movement seem to devolve into “we have our own definitions for words which do not match the dictionary but are TOTALLY ACCURATE”.
I would propose that the reason one would call someone a Nazi is not that they have evolved a higher concept of language, but that they want the cheap rhetorical power of associating their opponents with genocide without having to make a reasoned argument for why the opponents’ policies will actually cause genocide.
I agree that the important issue is not what meaning for Nazi you will find in a dictionary, or anyone’s claim to possess a privileged meaning, but that doesn’t mean the rhetorical power of “Nazi” has to be a stand-in for reasoned argument. You can have a reasoned argument for worrying about fascist tendencies among certain groups, and then try to use the word Nazi to make people think about the full implications of those tendencies–specifically, where they lead. This is not a motte-and-bailey thing about semantics: if you buy the arguments that contemporary Trumpism could lead to fascism in the same way actual-Hitler-Nazism did, using the word Nazi makes (non-cheap) rhetorical sense.
From the above SCC link: “The cause for concern isn’t that anyone you can see on TV today is plotting a Fourth Reich. It’s that some common factor causes people who start out as only moderately objectionable to predictably become something much worse. And modern populists share a suspicious number of characteristics and policies with their WWII-era fascist analogues…, and one can rightly be afraid that they’re drawing from the same underlying natural kind.”
Well, Joe has the right to define “Nazi” however he wants, but if Bob wants to go to a talk by a Breitbart guy and finds a group of Joes outside the venue punching anyone who looks like they’re trying to attend, Bob’s hardly on thin rhetorical ice if he suggests Joe is not quite the noble defender of freedom he claims to be.
Do you mean because the speech of “a Breitbart guy” is not necessarily that bad (with which I would immediately agree) or because there is no kind of speech the good response to which is de-platforming or violence, or because Joe’s only the audience and not the speaker, or something else? To be clear (since I’m all about that) my working model for “Nazi”-punchers is that they think there are kinds of speech that we should discourage because it’s dangerous, this includes not only “fire in the theatre” speech but also certain political speech (particularly if it seems to reject the very basis of speech), and that since talking louder has not worked to discourage such speech in the past we should sometimes resort to violence. Probably Bob would not think of Joe as a defender of freedom after being punched, but the question for the Nazi-puncher is how many Bobs would think twice next time before attending a similar speech event. I am trying to explain why I think the current fashion for Nazi-punching is not just punching people you happen to disagree with.
To me this is just the same as a right-winger arguing that it’s okay to call all Democrats “Stalinists” because hey, they share similar characteristics and sure do advocate some of the same policies. Sure, no one’s saying that Nancy Pelosi is building gulags right now, but it’s useful to invoke “Stalin” to make people think about those tendencies and where they might lead.
…or maybe we could just acknowledge that hyperbolic comparisons to evil historical figures should be reserved for people who are actually doing historically-evil things?
Everyone can spin a clear tale of how their enemies become demonic hell-lords destroying the world and enslaving the populace. That you can see how it could happen doesn’t make it likely to happen. And no one who makes these kinds of doomsday predictions ever seems willing to put their money where their mouth is and accept actual consequences for their predictions (financial or otherwise). So I am left to conclude that it’s just a combination of virtue signaling and outgroup-bashing, and the excuses for why it’s “justified” are just rationalizations to engage in activities which feel emotionally satisfying even if they’re corrosive and toxic to discourse.
I don’t disagree. A good reason to call the person you are punching a Nazi is to signal to others that and why the thing you are doing is virtuous; unless you yourself are a Nazi, you are literally outgroup bashing; punching a Nazi is likely to be emotionally satisfying for some sorts of people; and it being toxic to discourse (with Nazis) is kind of the point. There was some moment before the Holocaust when people should have started punching Nazis, and they didn’t. Right now people are making specific historical comparisons to interbellum Germany (example) in order to figure out whether we’re at the equivalent point. Most people don’t think we’re there, but it is wrong to say the people who do simply disagree with their opponents. There are specific and troubling parallels to Nazism and the more general race-tinged authoritarianism it has come to metonymize in relatively large parts of current politics. Since some think there is a rising tide of leftist back-lash against Trump, I would certainly be interested to hear whether it has any historical parallels in movements that resulted in some kind of totalitarianism.
I think the reason we probably differ is that I don’t see a broad band of gradation between “discourse” and “war”. In my mind, once you have determined that your opponent cannot be reasoned with because they are not mistaken but evil, you neutralize them with extreme prejudice. If they are not evil enough to pull out the rifles and start shooting, they are not evil enough to punch, either.
What I see “Nazi-punchers” as really doing is engaging in a low level of violence that they know they can perform without consequences. They are not trying to accomplish any actual practical ends — which is to say, they are not fighting evil. They are engaging in violence to make themselves feel superior and righteous, and doing it against individuals on the opposing side who they believe cannot or will not fight back for whatever reason. (You will note that “Nazi-punchers” seem most inclined to mob beatings of lone individuals or hit and run punching/pepper spraying, not actual fights.)
This isn’t a moral stand; it’s petty thuggery and vandalism cloaking itself in a cause of righteousness as an excuse. And it has more in common with the similarly-pathetic thugs behind the Kristallnacht than it does with anyone who ever fought real fascists.
Hm… That is certainly a place we differ. If I concluded that someone were evil, I would look for pragmatic opportunities for non-violent or less-violent means to stop their evil plans, both because I think violence is to be avoided regardless of its target and because I might be wrong about them. That said, I was really really just here to say I think “calling people they disagree with Nazis” is an unhelpful simplification. You’ve now presented a much more nuanced story that I think almost certainly describes some Nazi-punchers. I do think though part of the story here, even for those who latch onto the term Nazi for relatively bad reasons, is that recent events have made the Nazi descriptor culturally salient in a way it hasn’t been. People are talking about it, and they have reasons, and so the term becomes at hand. I do not have data, but I am guessing that people employing black bloc tactics fifteen years ago said Nazi less, and I think that the root of the heightened cultural prominence of “Nazi” is actual parallels even if it has grown in strange directions.
Currently, Amish manage to do quite well with an eighth grade education. They are successful farmers, using much, although not all, modern agricultural technology, and successful small scale entrepreneurs.
Yes, utilizing untold thousands of unpaid man-hours in labor from their children and other family members to undercut more expensive producers of various specialty crops (who have to, you know, pay wages to their employees). In Ohio, for example, they basically own the market for organic tomatoes, to such an extent that an Ohio State Extension horticulturalist advised against going into the business, because you will never be able to compete with them on price or quality. A family of 9: competitive advantage or market distortion?
It’s almost certainly true that the average Amish works more hours per day than the average non-Amish. But so far as the resulting output, that whole family is producing enough to support a reasonably good modern standard of living minus a few modern conveniences, mainly automobiles and in house telephones, that they don’t want. And they are doing it with a family labor force all of which has at most an eighth grade education.
Further, they are doing it despite some restrictions that raise production cost, such as horse drawn vehicles and no connection to the power grid.
Naive question on the Amish (just in case you happen to know): Do they read books for knowledge or entertainment? Just the Bible? It would be interesting and surprising to me if they kept up with the outside world’s culture/entertainment/science by reading material from off the farms.
Competing on who works the most leads to a race to the bottom where everyone works maximally and has minimal leisure time. The Amish are basically saying “we don’t care about leisure time” and using that to undercut non-Amish who do care about leisure time.
In one sense, this is just the market at work, but I’m not libertarian to that extent.
You could also compare what the Amish are doing to dumping of subsidized goods, except the goods are hours.
I am not convinced looking at production advantages is the right way to consider the Amish.
As far as I understand, the Amish put great store in being self sufficient and they try to supply each other rather than get things from outside suppliers. As a result, they only need to have a comparatively small income from outsiders to balance payments.
From basic economics, if two trading partners have a shared currency and party A is bad at producing exports, then the value of that currency will be fairly high for party A. Eventually, party A can accept lower and lower wages and still maintain a similar quality of life. Eventually, the exports from A will be so cheap that party B will start buying from A, and exports from B will be so expensive that A could not buy any of them. Thereby, the balance of payments is restored.
Thus from the outside, they may sell a pastry for $.50 (compared to a $1.50 pastry from your local food-mart) and we think “They must be efficient”, not realizing that the pastry took ten times as many man hours to make as the supermarket pastry of the same quality. (Okay, the Amish pastry is probably of higher quality. I remember that it was a good pastry.)
My speculation here is not based on anything more than my memory of a few interactions I had with these people more than a decade ago, but I haven’t seen any data to disprove it.
Does anyone have such data?
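Short of data, the implicit-wage arithmetic in the pastry example can at least be made concrete. A toy check, where the supermarket wage and the labor-hour figures are my own invented assumptions:

```python
# Hypothetical numbers illustrating the pastry comparison above.
market_wage = 15.0    # assumed supermarket wage, $/hour
market_hours = 0.1    # assumed labor-hours per supermarket pastry
amish_hours = 1.0     # ten times the man-hours, per the comment above
amish_price = 0.50    # the $0.50 Amish pastry

market_labor_cost = market_wage * market_hours   # $1.50 of labor alone
implicit_wage = amish_price / amish_hours        # $0.50/hour implicit wage

# Even at 10x the labor input, the Amish pastry undercuts the
# supermarket one as long as the implicit wage stays low enough.
```

So a low price is perfectly consistent with low labor productivity plus a very low implicit wage, which is the point: cheap output need not mean efficiency.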
I’m not sure what you mean by “undercut.” If the only difference between A and B is that A works ten hours a day and B works eight hours a day, we would expect A to make about 25% more income than B. Why would that seem to you to be a problem or unfair?
The Amish clearly have some leisure, but they value work and, I think, do more of it than most other Americans. They also value their social structure and believe that some modern technologies would erode it, so choose not to use those technologies. The first choice raises their income, the second lowers it. I don’t see anything problematic about either.
They certainly read some things for knowledge or entertainment–there are several publications by and for Amish, largely, I gather, about what is going on in the Amish world. My impression is that they read books with what they see as useful information. They use a good deal of modern agricultural technology, although not all, so would probably value relevant texts. Less likely to be interested in what they would see as useless knowledge.
@Jiro – “In one sense, this is just the market at work, but I’m not libertarian to that extent.”
…I’m sure no matter what one’s political persuasion, there’s always horror to be found in the SSC comments. For me, it’s these threads on the Amish. You are saying that running a farm with your kin should be prohibited?
I’m not concerned that A is making too much income; I’m concerned that this is pricing B out of the market. Imagine if the Amish were running steel plants at a loss solely so that, for religious reasons, they could sell steel for less than it cost to make it. This would result in non-Amish steel plants going out of business, and the more steel the Amish dump, the more non-Amish would be out of a job. And since they have religious reasons for dumping steel, the usual libertarian argument–that it isn’t possible to dump steel and remain profitable–doesn’t apply. They can dump the steel forever and use donations to keep their plants operating.
It’s the same thing, using farms instead of steel, and donating their labor rather than donating money to keep the Amish steel plants afloat.
I think that doing so should be prohibited when the kin are used to run the farm instead of giving them an education. If possible, I also think it should be prohibited when the kin are used to run the farm because, lacking an education in the past, their ability to do things other than help on the farm is sharply curtailed. (Particularly when this is deliberate policy on the part of the Amish to ensure that the next generation can’t leave).
If they can avoid that, they can be left alone, but I think that the fact that they are doing this is harmful to others and should not be praised. Like libertarians, I don’t assume that “bad” automatically means “should be prohibited”.
I’m not sure what you mean here…it’s not possible to run a steel mill below cost for long periods, because as soon as your bank accounts run dry you stop paying your bills and your suppliers will stop sending you inputs.
Same thing with farming…unless the Amish have massive savings I was previously unaware of, they’re going to go bankrupt in short order if they don’t recoup their costs.
In the hypothetical, the Amish can run a steel mill at below cost because, for religious reasons, they donate money to the steel mill, so the steel mill’s bank accounts never run dry.
In the actual situation, they are donating labor to farms for religious reasons instead, so that the cost of running the farm is below the cost of running a non-Amish farm that doesn’t benefit from donated labor.
I don’t see what that has to do with Amish choosing to work more hours, but I also don’t see what’s wrong with it for anyone but the Amish. In your story they are generously providing the rest of us with steel at below what it costs us to make steel. Do you also feel that they would be hurting us if they just happened to be better at making steel than we were? If not, what’s the difference?
I also don’t see why, if a foreign government subsidizes exports to the U.S., we should complain at their generosity.
This whole argument seems to assume some sort of absolute advantage model of trade, along with the mercantilist assumption that exporting is good and importing bad.
That’s a considerable exaggeration. When other farmers stopped using horses the Amish had a problem, since nobody was producing the equipment they needed for horse-drawn agriculture. For a while they made do by buying second hand and repairing, but eventually they developed their own little industry of producing stuff for their own use. And Amish, like some other people, like to eat vegetables they grow themselves.
But Amish producers of products such as milk routinely sell to the same middle men as non-Amish, and much of what they buy comes from non-Amish. Amish taxicabs, local non-Amish who provide transport to Amish, are a common pattern.
As best I can tell, literal self-sufficiency has never been a part of their doctrine. What is part of their doctrine is avoiding entanglement with the world. Working as an employee of a non-Amish employer is permitted, even common in some areas, but disfavored. I think partnership with a non-Amish would be forbidden by the ordnung of most or all Old Order Amish congregations. But buying from and selling to non-Amish isn’t an issue.
It’s basically the same thing, except that they’re dumping farm-labor-hours instead of steel.
If they were actually better at making steel, they would be causing technological unemployment. I believe that technological unemployment is a real problem which can be serious, so yes, they would be hurting someone.
Having a world where nobody advances in technology is seriously bad, especially in the long term, so the overall effect of making steel cheaper is good, because it balances the benefit of technological progress against the harm to people who become unemployed. The harm is still there, however, and in other situations where the balance is different, may be part of an overall harmful effect.
Unemployment is unemployment. If the Amish or some android or an alien civilization produced extremely cheap steel, obviously that would bid down the wages of people in that industry, and people would choose not to work there. But real wages everywhere else would rise, since everything you buy made from steel would be cheaper.
Survivorship bias. Those who were not equipped died, in spades. We only see today the few that were equipped for whatever prior reason or lucky enough.
Servicemen in the United States Army and Navy suffered a roughly 2.5% mortality rate during World War II. The Merchant Marine was a bit more dangerous at 4.3%. Even the Soviet army only managed 20% mortality during WW2. So, not a significant factor I think.
Not only that, the places with the highest mortality were generally the ones with the least individual control over their survival. Everyone’s submarine services had far and away the highest mortality rates. Yet it’s really hard to see how the capability of an individual enlisted submariner is going to make a big difference to his life or death. The capability of his captain is orders of magnitude more important, and luck more important still.
(The British Merchant Navy was close to 20%, IIRC, but the same applies to them.)
My grandfather was a newly-minted US Navy submarine lieutenant in WW2. The stories he had to tell about the experience were bone-chilling.
He only served for 4 years, and spent the next 40 years making his fortune in private business. But to his dying day, his identity was that of a Naval officer first and foremost. It’s probably the majority of the reason my father became career Navy.
It’s fascinating to me how life-defining experiences like that can become.
Nitpick: Red Army mortality was a bit higher than 25% when POWs who died in German captivity are counted. German military mortality, likewise, a tad over 25%.
I tend to blame this on the transition of schools from being, well, whatever they used to be, into something like a cross between a daycare and a padded-room prison. The reason our adults seem like children is because they’ve never been allowed to be adults.
Our social plan, at least within my lifetime, has been “keep the kids safe and out of the way until they’re ready to take care of themselves.” But the self-sufficiency function is not “N years of being alive”. It’s “N years of being forced to be self-sufficient.” You don’t let people out of the playpen when they’re ready; you let them out (or they break out), and then they become ready.
Everything we stick an age limit on has this character, which is why people just above the age limit for any given vice tend to be retarded about it. Raising the limits will not help with this, it will just keep people retarded for longer.
[edit: Something I haven’t thought of before: Okay, take away the padded walls and you’ll get adults who actually know how to be adults, but you’ll also get the human wreckage of whatever slice of the population really need those padded walls. I think there might be a tradeoff here that has no good answer.]
Treat people differently based on their needs, as individuals.
We get the human wreckage right now; we just get it at whatever age where we choose to take away the walls. Nobody lives in a padded room forever (as much as some campus activists might want to).
I think the bigger issue is that if we remove all the padding suddenly, we’ll get the simultaneous wreckage of every individual who would have wrecked when they got out of the rubber room. (e.g. removing the drinking age is going to cause the sudden alcohol overdose of all of the 12 to 20 year olds who would have OD’d at age 21.) No politician would ever be willing to take the heat for that.
Steadily reduce the amount of padding? You have about ten years* to work with here, there’s no need to expect everyone to magically jump from preteen to young professional levels of maturity, although obviously some people can.
* Ages 8-18 or so seem reasonable to me; figure five-ish years of general education followed by five-ish years of either apprenticeship or serious academics.
Finland does this really well — driving isn’t legal until 18, but you start gradually scaling up from 15, with high school driving education as part of the yearly curriculum. Finland has extremely safe roads, all things considered, and the legislative developments don’t seem to be enough to explain it: it is also cultural.
More importantly, almost everyone needs to be able to drive in such a sparse country, so almost everyone shares the costs and benefits from economies of scale.
Would be nice if alcohol, tobacco and firearms were treated similarly.
one thing that can make people grow up extremely fast is systematic child abuse. I don’t know of anything else that can make a child more attuned to the concerns and whims of adults (who, let’s face it, are the bizarre creatures a human has to be able to negotiate with to be considered “adult”) as quick.
There are battered ten year olds out there who can wash dishes, do tax returns, and peel vegetables like their life depends on it, because sometimes it does. And in the past perhaps this inadvertent strategy was worthwhile because nobody had the life expectancy for the downsides to be cripplingly significant, at least within a generation.
There is so much more for an individual to know now, “common sense” is a fiction. People remained in their contexts, people had physical access to the few(er) experts they needed to negotiate life, social control was much tighter.
I think we have yet to even begin to see the full returns on “coddling” or “spoiling” the youth of today.
@aldi – “one thing that can make people grow up extremely fast is systematic child abuse.”
I strongly suspect you’re using a non-standard definition of child abuse. In any case, while there are a great many nifty things about the modern world, it certainly isn’t obvious to me that it’s innately superior to all human existence prior to the 1960s. If child abuse was in fact systematic, it doesn’t seem to have prevented people from creating prosperous, orderly civilizations full of happy people.
“I don’t know of anything else that can make a child more attuned to the concerns and whims of adults as quick”
Children have to be attuned to the concerns of adults; how else can civilizing them be accomplished? Whims less so, but adults being whimsical in the sense you imply here doesn’t seem like a good thing.
“And in the past perhaps this inadvertent strategy was worthwhile because nobody had the life expectancy for the downsides to be cripplingly significant, at least within a generation.”
Life expectancy is largely a function of infant mortality. People were living into their 70s and 80s in the Roman empire, which lasted for a thousand years. Ditto all the other great nations and empires, from the Brits in the 1800s to the Chinese in 2000 BC.
“There is so much more for an individual to know now, “common sense” is a fiction.”
Can you name some specific things? As I mentioned in the other thread, I didn’t get a “modern” education, and I’ve done just fine. Meanwhile, I’ve paid money for a college-level honors English class where a sizable percentage of the students didn’t know what a paragraph was.
“I think we have yet to even begin to see the full returns on “coddling” or “spoiling” the youth of today.”
…Do you spend a lot of time around the youth?
I get that there’s some hyperbole here, but the whole “guys just out of high school fought the Nazis, so what’s wrong with kids today” thing is … historically a bit ignorant. The biggest difference is probably that people are on average less physically fit (less working on farms, less walking around, more calories, etc). The men who were drafted (1/5 of men who had been registered for the draft, 10 million, plus 6 million volunteers) mostly did not end up in combat positions, so the guys who actually saw combat were a minority of a minority. Among that minority of a minority, the infantry absorbed a disproportionate share of the casualties.
Paul Fussell’s Wartime and The Boys’ Crusade – he was an infantry officer in the war, so presumably knew what he was talking about – paints a picture of young men who really did not want to die, led by officers barely older than them (he was 20 when he landed in France). Statistics (which have, admittedly, been challenged) suggest that a lot of soldiers refused to fire at the enemy, closed their eyes when they were firing, etc. Combat troops generally wore out after a few months of combat operations.
If you were to take all the 18-year-old men deemed medically fit, run them through a bit of exercise and dieting, and then train them to the standard of a WWII infantryman of whatever country, they would probably perform as well as their equivalents 70-odd years ago did. People are physically softer, and are probably more used to various creature comforts, but the whole “grampa fought the Nazis and now his kid needs a safe space” line ignores that grampa was probably terrified much of the time.
This is a really excellent point. It’s also worth noting that there was absolutely no shortage of men who flocked to sign up for the armed services after 9/11 (recruiting offices were overwhelmed), suggesting that we still do have plenty of people who are willing and able to make sacrifices when they see an existential threat on the horizon. (We can debate endlessly about whether the threat was real, but my point is just that it was real to them.)
I tend to think that the main difference is that people live in an artificially-constructed bubble without self-sufficiency for much longer now than they did back then. It used to be that by age 17 or 18, you were out and working full time in a trade, and if you lived in a rural area, you’d essentially had a full time job with the family business for a few years before that. Now we keep a significant chunk of our population (certainly the upper-middle-class children that are always being pointed to when people complain about the infantilization of society) in artificial environments on someone else’s bankroll until they are 22 to 26 (depending on if they’re going to grad school). Real self-sufficiency is beginning 5-10 years later than it did as a result of increased schooling. And it’s not even close to an equivalent substitute.
I think the point is that people today COULD fight the Nazis too, if we expected that of them or gave them that option instead of coddling them.
So are you advocating Robo-Hitler, or…?
Well back in the time of their grandparents, fighting nazis with guns was legal and the government even paid you to do so, providing you with weapons, transport and training for it.
But afterwards there was a huge amount of legislation making difficult or even impossible to fight nazis legally, essentially subsidizing nazism. That’s why this generation needs safe space, cause the government took away their right to shoot nazis.
If you deregulated nazi fighting our generation would quickly be shooting nazis in the face as their grandparents did.
By taking away all the Nazis worth shooting at. Deregulate Nazi-shooting, and nothing changes, because the tiny handful of postwar losers who adopt the name either die or shift to a different name, the end.
A stable society cannot in the long term depend on a safety valve of the form, “These people are tough, mean, capable hombres but you are allowed to prove your manhood / satiate your bloodlust by fighting them”, because you will eventually either run out of people willing to play that role or run into ones too tough and capable for your young men to handle.
College is now used as a proxy for “can hold a job and perform as instructed, and has the minimal knowledge required of an average member of society”. Why one has to spend 3 years in a classroom and insane sums of money to achieve that is a big question, and I suspect it’s mostly because all other proxies have been killed off for various political reasons. There are ridiculous requirements for many professions in the US, not only college requirements but professional licensing requirements. I spent 35 years of my life not knowing that a person cutting my hair needs a professional license – until I came to the US. Billions of people still have their hair cut by woefully unlicensed barbers – and their haircuts are indistinguishable from ones done by venerable licensed professionals in the US. But somebody has to pay these costs, somebody has to pay for all that cottage industry that teaches, examines, checks compliance, etc. – the industry that produces absolutely no value except for itself. I think if we sum up all such industries, and add the lot of money spent teaching Shakespeare and Catullus’ poetry to people who then proceed to use the resulting diploma to gain employment as sports reporters and marketing associates, we’ll find a lot of missing value. Not that I am against learning Shakespeare and Catullus’ poetry – but it shouldn’t be required for employment as a marketing associate.
The consumer price index aggregates the cost of food, clothing, transportation, recreation, “other goods and services” and the sectors described here. If the cost of food and clothing grew much more slowly than the cost of other things, it would look like a bunch of sectors’ prices were spiking with respect to “inflation”, i.e. with respect to an average of that sector and other sectors, where the other sectors’ prices weren’t growing as fast. Can all of this be explained by enormous efficiency gains in agriculture and textile manufacture?
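The averaging effect being described can be sketched with a toy calculation (the weights and growth rates below are invented purely for illustration, not actual CPI figures): sectors growing faster than the weighted mean look like they are “spiking relative to inflation” even if nothing about them changed, simply because fast-improving sectors drag the average down.

```python
# Invented sector weights and annual price-growth rates, purely illustrative.
weights = {"food": 0.15, "clothing": 0.05, "health": 0.20,
           "education": 0.10, "other": 0.50}
growth = {"food": 0.01, "clothing": 0.00, "health": 0.06,
          "education": 0.07, "other": 0.02}

# "Headline inflation" is just the weighted average of sector growth rates.
cpi = sum(weights[s] * growth[s] for s in weights)
print(f"headline inflation: {cpi:.2%}")
for s in weights:
    print(f"  {s:<9} {growth[s]:+.1%} nominal, {growth[s] - cpi:+.2%} vs. CPI")
```

With these made-up numbers, food and clothing come out well below the average while health and education come out well above it, which is the pattern the comment is pointing at.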
The paragraph refuting this idea seems inadequate.
1) Private school doing just as well. There are a lot of possible explanations for this. My best guess is a) that they don’t accept students midyear as much as public schools, and they don’t face the same sorts of consequences that public school administrators do when they expel a student. Keeping out or kicking out students who are hard to teach is a fantastic way to improve educational outcomes. A couple other possibilities: b) the person who described this is twisting the facts in a major way; maybe they’re great grant writers with connections to donors, and some of the donated time/money/infrastructure/land are not accounted for. c) the school is not an accurate demographic sample of the United States as a whole.
2) Indian health care. Of course it’s a quarter the cost! Comparing the prices of things in the US and India doesn’t seem to say much besides the fact that wages are lower there.
3) Grey market cheaper. This doesn’t factor in the cost of doing the research to discover the drug.
4) Saline at the hospital. To cover the cost of caring for the meth addict you described. Don’t hospitals overcharge for some things and undercharge for others? I don’t think looking at the most egregious case of a mispriced service suggests that hospitals could be run much more cheaply (maybe they could, but I don’t think this selected example suggests it). I think this example only suggests that hospitals could be pricing services more in accordance with their costs.
Basically, I think the examples provided for “we could be doing this cheaper” don’t amount to much, but the brief nod to the possibility that different industries’ prices inflate at different rates seems quite plausible to me, particularly given enormous efficiency gains in agriculture and textile manufacture. Agriculture and textiles have capitalized on the gains from outsourcing much more effectively than education, health, and housing. If those sectors had been the ones to capture the efficiency gains from outsourcing, we’d be racking our brains desperately about why the cost of food had inexplicably risen dramatically with respect to “inflation.”
Re 2: Indian private health care is one-quarter the cost of Indian public health care, not of American health care.
Re 3: The drugs are generic in both countries. The drug no longer has IP protection in the US and is being sold by people other than the people who discovered it.
Got it. I should have assumed you would never make the mistake I was supposing you had re: 2. Sorry.
Re 2) So then we need something to explain how Indian public health care is so cost inefficient. Given that explanation, whatever it is, I don’t see how it could be the explanation for the broader issue, given that the US doesn’t have the same cost disparity between private and public hospitals.
Re 3) I don’t have much of an answer to this one. But suppose for a second it had been food in which the prices for that sector grew relative to the other sectors. How easy would it be to find two food products of demonstrably similar quality while one is many times the price of the other? I guess this is a question of blogger-degrees-of-freedom, similar to the piece about cardiologists you just wrote. But I might be willing to bite the bullet on this one.
Even biting the bullet on 3), the fact that some sectors have made efficiency gains from outsourcing labor seems like enough to explain why the remaining sectors have relative efficiency losses compared to the average of them all.
At least, that seems to be enough to explain education and housing. The fact that the US spends so much more on health care than other first world countries would suggest there are gains to be made in the US, and as you mentioned above, there are similar ones to be made in India. But that seems to be a problem of much smaller scope than having multiple industries across the world seeing inexplicable efficiency losses.
2) doesn’t just need to explain why it’s cheaper in India, but why it’s cheaper in Switzerland, where a Big Mac is $7 and takes 10 minutes to earn…
Everything is more expensive in Switzerland except education and healthcare which is massively cheaper. This is an anomaly and if we can figure out this one and fix it, I suspect most other costs would adjust accordingly.
Even assuming that this is true, and these industries have simply experienced normal inflation-driven cost rises that are higher than any other industry’s, there is still the question of what exactly is causing such high rises. Whether we call it cost disease or inflation, one should still wonder what drives it year after year.
Wages would normally be considered one of the major drivers of inflation in most industries, but of course it was demonstrated that this hasn’t seemed to occur.
Raw material costs? But again, if this was the case we would also observe much greater inflation in primary and secondary industries, but we don’t. The costs of the basics, whether raw materials or electricity or water, have not rocketed up.
There are probably industries which have not taken advantage of outsourcing or other efficiency gains, yet which have not seen inflation to match these sectors’. There should be more to it.
Indian health care is cheap relative to middle class wages.
Here’s an anecdote. I had spine surgery at one of the best hospitals in Pune for 1.4 lac rupees – about $2000 – and I’m told I could have spent half that if I had shopped around. This is 1-3 months’ wages for a software engineer or other professional. In the US, a similar surgery might cost $100-200k – approximately 0.5-2 years’ wages for a software engineer or similar professional.
To make a similar comparison, a month’s rent in Bandra or Colaba (posh neighborhoods of Bombay) is in the same ballpark as major surgery. Can you get surgery in NYC for approximately 1-2 months of rent in the Upper West? I doubt it.
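Taking rough midpoints of the ranges quoted in this anecdote (the commenter’s own estimates, not verified data), the affordability gap works out to something like:

```python
# All figures are the commenter's rough estimates (midpoints of the quoted
# ranges in the anecdote above), not verified data.
india_surgery = 2_000        # USD: "1.4 lac rupees - about $2000"
india_monthly_wage = 1_000   # USD: so surgery = roughly "1-3 months' wages"
us_surgery = 150_000         # USD: midpoint of the quoted $100-200k
us_monthly_wage = 12_500     # USD: midpoint of "0.5-2 years' wages"

months_india = india_surgery / india_monthly_wage
months_us = us_surgery / us_monthly_wage
print(f"India: surgery costs about {months_india:.0f} months' wages")
print(f"US:    surgery costs about {months_us:.0f} months' wages")
```

On these assumptions, the surgery is several times more expensive in the US even after normalizing for local professional wages, which is the point of the comparison.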
You could use the GDP deflator, which doesn’t favor any industry, and compare that to the cost of these things. Still though, the deflator is itself affected by the cost of these things so I’m not sure how much more useful it is.
The second “discuss this phenomenon” link seems to be missing an initial “http://”.
Love this post! It really ties together a lot of loose thoughts that have been bouncing around in my head about the economy of the modern world. Everything costs more, and it’s not just simple inflation, but no one’s making more money – so where is the money going?
I’ll take a stab at one small piece of it – the pension crisis. Public pensions have very strict risk tolerance rules (no one wants their pension fund to go broke in a market crash!), so they tend to invest heavily in high-quality bonds. Notably, the US Social Security Trust Fund is entirely invested in treasury bonds and US securities. But with interest rates being so low these days, the yield from bonds just isn’t enough, so they require a larger and larger nominal value to fund the same yearly pension. It’s sort of the same as how 1st world governments have been able to run up ridiculously large nominal values of national debt, but the interest payments are still manageable. If we had 80’s level interest rates, this level of national debt would be disastrous, but public pensions could also get by with a much lower amount of bonds.
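As a quick sketch of this mechanism (illustrative numbers only): if you approximate a pension fund as a perpetuity that must throw off a fixed annual payout from bond interest alone, the required principal is the payout divided by the yield, so a fall in yields from 8% to 2% quadruples the fund needed for the same pension.

```python
# Illustrative numbers only; treats the pension fund as a perpetuity
# funded entirely by bond interest (principal * yield == annual payout).
annual_payout = 40_000  # USD per retiree per year (hypothetical)

def principal_needed(payout, bond_yield):
    # Perpetuity approximation: required principal = payout / yield
    return payout / bond_yield

for y in (0.08, 0.04, 0.02):  # roughly: 1980s-style yields vs. recent yields
    fund = principal_needed(annual_payout, y)
    print(f"at a {y:.0%} yield the fund needs ${fund:,.0f} per retiree")
```

The same arithmetic is why the same nominal national debt is far more manageable at low rates: interest paid is principal times yield, just read in the other direction.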
My guess is that it ultimately reflects the slowing of productivity growth in our society. There are no groundbreaking leaps in productivity coming down the pipeline, and the magic of finance has factored that into today’s prices by making us pay much more in the present for a secure retirement than we would if we could expect the future to be significantly more prosperous.
A huge proportion of this seems to me to be rent-seeking and monopoly power. People will charge as much money as their customers are willing to pay, and what people are willing to pay depends on how much money they have and what the alternatives are. Right now, there are no practical cheap alternatives to the classic model of hospital stay health procedures (there are in some markets, which is probably why plastic surgeries and lasik go down in price where other healthcare doesn’t). This means that hospitals are free to charge enormous amounts of money to anyone who has it (rich people and insurance companies), and let the poorer rot. They are in fact incentivized NOT to provide a cheaper option, because if they did it would cut into their own profits. If the hospitals/doctors didn’t have a legal monopoly on medical treatment (and the medical schools an oligopoly on the supply of doctors), there would be room for both luxury medical treatment and more normal service. Of course, this is exacerbated by the fact that medical schools artificially limit the number of doctors, and that the law sharply limits who can and cannot provide all sorts of medical services based on arguably outdated criteria.
Similarly: Top universities have a monopoly (or at least oligopoly) on that form of prestige, and so charge exactly as much as they can wring from students, which is enormously increased by the fact that, via student loans, they’re charging the government, a famously price-insensitive purchaser.
Buyers who are willing to pay whatever is asked are another factor here: In college, public transit construction, military spending, and many kinds of healthcare, the people buying it are not looking to save money. They’re looking to achieve a specific goal, whatever the cost. This provides untold opportunities for prices to balloon.
I feel like there’s more to say on this topic but I’m kinda falling asleep so I apologize if this is incoherent.
PS: With regard to housing, it’s not exactly a spooky mystery as to why prices have skyrocketed in certain areas. It’s literally illegal to build housing at a high enough density to make it cheap in a lot of jurisdictions. In other areas, it’s not illegal, but it is rendered unprofitable by various regulations and laws which usually have a barely-concealed NIMBY motive.
I don’t think there’s a huge mystery here, we already know intuitively which of these industries are affected. When you run into an expense, you don’t need an estimate to know whether it will be fast and cheap, or an enormous, expensive hassle. And the expensive industries all tend to have one thing in common: licensing schemes.
State-issued licenses have absolutely exploded over recent years, and they restrict labor supply in loads of fields. Law, medicine, teaching, engineering and architecture… these are the areas where lay people are stuck racking up hour after hour of labor and paying out the nose for them. And it’s a big change from 30-50 years ago.
(In fields with little regulated licensing, credentials and degrees can still have this effect).
Get rid of licenses for most professions and allow people to rise and fall based on their reputations and achievements. This will do more than anything else to end cost disease.
If the issue were licensing, this would show up in the form of increased salaries. As you can see in the OP, this does not plausibly account for more than a small portion of the cost increase.
Certainly licensing is part of the reason why doctors are paid so well, but they’ve been licensed and well-paid for a long time; this doesn’t explain the recent cost increases.
Not necessarily… it could (does?) result in more headcount rather than higher salaries. Total industry wages would be the right metric. When I was in high school, the administrative staff was very small; I’m guessing it looks different now.
I made exactly this point upthread in response to a different comment, and I think increasing headcount is definitely part of the story, but it’s not consistent with a story about pricing rising due to stricter licensing requirements driving down supply. If the supply of doctors decreases, you can’t increase headcount without paying them more.
But you can increase the headcount of nurses, nurse practitioners, x-ray techs, secretaries, etc.
This is definitely going on in education. Headcount at public schools rises dramatically, but class sizes stay the same (or increase) because all the new people being hired are administrators or specialists rather than regular full time “in the classroom” teachers…
Perhaps the things that are getting more expensive and not improving are getting more expensive because they are not improving.
If you want to be the hippest phone-user on the block, you only need the latest iPhone. A new one will come out in a year anyway, before most of the plebs have had a chance to catch up, so you don’t need some sort of super-duper iPhone. Same goes for TVs – the rich can buy the latest 4D curved Uber-HD TV, and by the time the price falls enough for Joe Average to have one too, they can just move on to holograms or whatever. On the other hand, if the plebs’ education/health-care/etc is about as good as yours in objective terms, with no major improvements on the horizon, you need some way to signal that you’re rich and high status, which results in a zero-sum battle to get into the most prestigious university/hospital/etc.
This doesn’t explain the subway bit. I suspect over-regulation (and regulatory capture) and increased risk-aversion do play a role as well.
An addendum to this is that I’ve started to see more stuff about golden or jewel-encrusted phones in the last couple of years, just as the rate of improvement in the technology has started to slow down. If this becomes a trend among the rich, Joe Average will feel some need to keep up lest he become Joe Below-Average, so he’ll at least spend more to get a fancier case (etc) and the amount spent on phones will start increasing even as they stop getting much better.
I could see this working with education, but do people really signal all that much over healthcare?
It doesn’t seem so obvious to me either, but Robin Hanson has claimed as much in the past, and people do at least talk about their medical problems a lot if you give them half a chance. Perhaps also, being in a fancy hospital (which therefore costs more) feels higher status even if there’s no one there to see you (and usually you’ll have visitors too).
do people really signal all that much over healthcare?
Apparently people used to have their appendices removed, even if healthy, because it was all the rage to have this new operation. From a periodical of 1903:
Humans are strange creatures.
The starting point is government regulation, but government regulation that favors business. I first noticed it when I was an undergrad back in the days when credit reporting was a top secret operation. Eventually it was promoted in Congress and there was a proliferation of these businesses. The main argument was that better access to credit would grease the skids of capitalism. Politicians and their business partners never consider the cost of bankruptcy due to credit promotion, fees for credit reports, and innovations like extra charges based on your credit score. They certainly did not take into account identity theft as a massive cottage industry that was essentially unpoliced for decades or the continued cost of politicians facilitating all of that by making Social Security Numbers national identifiers for the credit industry. There is no better model of the government and their business partners inventing a multi-billion dollar industry that essentially adds little to no value to the average citizen. In terms of financial services, I would also add the 401K/403B programs that are currently being reassessed as retirement vehicles that probably have little value to most holders, but major value to the financial service companies collecting fees for decades irrespective of mediocre financial performance. Another government invented business to benefit the few.
In healthcare, the model is no different. Outrageous costs compared with other countries, essentially for a bloated, high-cost, low-to-no-value management structure. That same management structure rations healthcare by government mandate in order to make corporate profits at every level. It is a unique business model that can ration goods and services, provide an inferior product, and do it with abundant hype from the government. PBMs, EHRs, MCOs, and Big Pharma are all essentially set up to extract top dollar from the American people.
Prices have nowhere to go but sky high when your economy operates on a model where the government can invent businesses, set them up to get as much revenue from a population held captive by threat of financial ruin, and have them run by managers who have no idea what it is like to compete on price and quality. There is no incentive to provide a quality product when you have a guaranteed income stream.
In healthcare these days – all the managers have to do is advertise quality. The average citizen should be seeing more “Top Hospitals” than really exist out there.
> Once again, nobody seems to have been offered the choice between expensive hospitals with private rooms versus cheap hospitals with roommates.
You touch on this point a few times throughout the article, and it definitely resonates with me: I recently spent some time backpacking, and my time in India reflected this in spades. I’ve been to India probably a dozen times to visit family, but this was the first time I heard (and experienced) the phrase “sab kuch milega”, from other backpackers. It loosely translates to “everything is possible”. It’s an oddly rationalist feature of Indian culture: people are willing to be pragmatic in a “both of us are better off so this makes sense” kind of way that the US has a bunch of weird taboos and norms around.
As one example, I showed up at 3 AM at a hotel after getting off an overnight bus, and they didn’t have any formally free rooms until check-in time (around 10 AM). What they did have was a room where the residents had paid for the night and left at an odd time, around 1 AM. In any American or European hotel I’ve been to, they would steadfastly refuse to let me sleep in a room that was technically not available until housekeeping had made it look like no one had ever been there, as you find in every hotel. It smelled a little like cigarettes, they gave me a clean extra blanket, and I slept on the couch, but it was way way better than my deliriously sleep-deprived self camping out in the lobby chair for seven hours (or walking around an unfamiliar Indian city in the middle of the night). I know this is a weird example, but it seems like a fundamentally irrational facet of American culture that businesses are so unyielding at offering anything but 100% service at 100% price.
I think this might be a symptom of market consolidation (though I could be wrong). If you are running a small hotel all by yourself, you can take some risks. Maybe you let this guy sleep in an unkempt room and it all works out; or maybe he gets angry about the cigarette smoke and decides to sue you; but it’s your own choice to make in the end. Besides, you need to grab absolutely every bit of money that comes your way in order to survive, so you’re incentivized to risk a little.
But if you’re managing hotel #1,234,567 of some mega-chain, you can’t afford to take any risks at all. Your mega-corp is a nice juicy target for lawsuits, while the marginal benefit of acquiring one more customer (and at a discount price, too) is basically epsilon. On top of that, you are not accountable to yourself, but to the corporate office of HotelCo, Inc. — a corporation so large that the only way they can manage their business is to set specific metrics and rigidly adhere to them. If you screw up even a little, you will instantly lose your job, so the risk just isn’t worth it.
Note that this falls under Scott’s 6th suggestion – institutionalization leading to effectively lower risk tolerance, since institutions have lower risk tolerance than individuals.
Yes exactly; but, on top of low risk tolerance, you have a very high incentive for total standardization. Firstly, because the only way you can manage your massive business empire is to make every franchise the same; secondly, because your customers actually expect the same exact service at every location. That’s one of your primary market advantages, in fact (other than economies of scale, which, once again, require total standardization).
This sounds weird to me. Intuitively, I would think that a large institution has less relative risk than an individual due to its size, and thus should be more risk tolerant. What mechanism cancels this effect out?
A large institution may have less relative risk, but greater absolute risk.
If you sue a single hotel for a significant amount of money, you’re likely to end up owning a hotel, and you don’t want a hotel. So you’re mostly limited to whatever the cash on hand plus the second-mortgage value of the property plus the insurance value is. So the maximum take-home from a lawsuit, minus legal fees, is going to be less. Plus, any jury is going to look at a mom-and-pop business with a lot more sympathy than a mega-corp.
If you sue a hotel chain, they have a huge cash flow that can be tapped or garnished. Their lawyers make it harder to win. But winning means you go home with a big pile of cash rather than a small pile of cash and a title to a hotel. This makes it more appealing to sue for both lawyers and plaintiffs.
Yeah – you let Random Guy have the room for a few hours instead of cleaning it out and then the next day he refuses to pay because it’s wasn’t a “proper” booking or he couldn’t use the bed because the sheets were dirty or whatever. A small business can absorb that and decide the next time they’re not going to be so obliging; working for a large chain, you will be eaten for not following procedure and maybe even fired.
Hotels charge in advance.
I take your correction 🙂
My point was more that an owner-operator has more freedom to take risks than someone working for an organisation that sets one set of standards for all its units nationally or globally.
Could it be that the costs are increasing because technology enables us — and, in some cases, requires us — to obtain more valuable goods and services ?
For example, back in ye olde days, cancer was a death sentence. If you got cancer, you simply died, and the only costs you would incur at that point would be the cost of your funeral. Today, we have medicines that can mitigate at least some of the cancers (and, in rare cases, eliminate them entirely). These medicines are expensive, and, since about 15-20% of people will get cancer at some point, they may have a non-trivial effect on the total cost of health care. But it gets even worse, because medical R&D costs rise exponentially as all of the easy problems are solved one by one, leaving only the very difficult ones.
The same might be true of education. In ye olde days, an average child only had to learn readin’, ritin’, and ‘rithmetic in order to become a fully contributing member of society — or, perhaps, in order to enter a profession that won’t damn him to lifelong poverty. Today, that same child needs to learn things like algebra (or even calculus), a foreign language (preferably more than one), basic computer skills (or even computer programming), physics, and so on. To make matters worse, society has become a lot more complex, and there are many other mandatory activities that demand the child’s time (unless he wants to become a recluse, I suppose). Education has become a lot more difficult.
This might even apply to basic living necessities. In the old days, “food and shelter” translated to “porridge and a wooden hut”. Today, it means “a balanced diet and air conditioning”. And our society practically requires access to things our ancestors could not even dream of, such as cellphones, mass transportation, and vaccinations. As our quality of life rises, so does the cost.
This sounds reasonable, but also sounds like it wouldn’t account for the enormously high rate at which costs are increasing, and the fact that they aren’t increasing as quickly outside the United States.
I don’t think education is any more functional than it used to be. Most people don’t use advanced math. Foreign language classes are practically useless, and so many classes are crammed in that students end up forgetting them anyway.
I think what he’s missed is that most of the things he listed are not necessary, merely treated as though they were.
What’s the difference?
What you observe is fifty years of optimization of wealth extraction.
Price outcomes depend on the contributions of hundreds of participants.
Every participant optimizes his/her earnings, exerting a constant upward pressure on price.
Participants become ever more expert at getting rich. Wealth-extraction schemes (scams) are refined and optimized (in all markets), and price increases are pushed downstream (in markets where buyers can’t push back).
Radical price increases reflect markets where consumers have reduced ability to push back:
– complex markets (can’t understand)
– opaque markets (can’t see)
– entrenched/highly-regulated markets (can’t modify)
– necessary-to-keep-living markets (can’t avoid)
– limited-quantity markets (really want)
– intermediated markets where the end buyer doesn’t decide how things are purchased (don’t choose)
Some systems are resistant to contributors’ efforts to extract wealth and some systems are not. There’s an equilibrium between cost and readiness to pay.
To reduce the costs in expensive domains, willingness to pay the high costs has to be reduced. As long as the buyer won’t or can’t say no, costs will increase through the entire production process.
There won’t necessarily be one big obvious rip-off, but every participant will optimize the heck out of his contribution and the overall pressure will push costs up.
Could one provide a cheaper alternative in these domains? Sure for a little while, but if the bottom line is that people are willing to pay more for the service the prices will creep back up.
The only exception would be where the new, lower-priced, alternative sets a new standard and buyers refuse to continue paying the old prices. See
https://stratechery.com/2016/dollar-shave-club-and-the-disruption-of-everything/ for a great article about this.
This is getting too long, but it reminds me of a thought I often have. People imagined (as you mention) that all the increases in productivity would result in reduced work time.
But, this idea didn’t take into account the balance of power.
Increased efficiency just increased benefits for owners. A worker is just a cog; increasing efficiency puts ever more pressure on the cog.
As with cost-disease, if the cogs aren’t in a position to say no, the pressure to optimize them will just continue unabated.
I may have just convinced myself of the value of a universal basic income.
This is an absolutely spectacular summary of how and when markets fail. Bookmarking this to point people to later when they complain about how the market obviously will/won’t work for case X.
Well I guess food prices are about to go through the roof!
They do when there’s a constraint on supply, because demand has severe constraints on its elasticity, unlike luxury goods.
I didn’t take his list to be, “here are factors that will instantly halt a market”, but rather, “these are factors that will upset basic supply and demand equilibrium and distort markets.”
The standard Econ 101 concept of a market assumes that both participants in a transaction have the option to decline and seek other bargaining partners. Cases where that isn’t possible for whatever reason (government regulation, private monopoly, externally-imposed time constraints) are necessarily market-distorting.
Individual choice can’t be the sole preserve of the consumer. Workers have to be able to say “this work is stupid or horrible” as a mechanism to (a) improve conditions (b) stop pointless work.
We want the power to say “no”.
No, I don’t want to buy that. No, that idea isn’t “exciting”. No, I’m not going to that meeting. No, no, no, no, no.
(Actually, thinking about it, some of us already have this power. I exercise it daily. I think more people should do the same. Just say “no”. (I get great pleasure from walking around a shopping centre thinking about all of the things I’m not going to buy.))
That is a very abstract comment, so it is very hard to pin down what you are saying. You could apply it in many different ways to a single example, to produce many contrary claims.
But the most concrete points you make are all wrong in these examples. Most of Scott’s examples are non-profit, so what do you even mean by the owners? In the for-profit examples, the owners did not do very well. You say that one could produce a low-cost version, but the willingness to pay more would push the price up. But that is confusing price and cost. The owners would love to sell a low-cost version at list price and pocket the difference. But they don’t. Much of the cost is going to pay more people, and to pay them less. That is not efficiency.
(By these examples I mean education and medicine. Housing is pretty different, so that even your concrete statements do not seem specific enough to evaluate.)
Thanks for your answer. I will try to find time to respond because I think your criticisms are valid but it will be a few days.
One scenario along these lines is that people are tempted by, say, price discrimination and build big organizations to do it, but they aren’t actually profitable and just pay for the apparatus of price discrimination. Hospitals and insurance companies do have big departments whose job it is to battle each other, accomplishing nothing in net. Colleges are very good at price discrimination, but then what can they do with all the money? They are forced to spend it, and perhaps have no better idea than amenities.
But medicine is organized very differently in America, Britain, and France. And yet all three have seen medical spending grow at about the same high speed. American spending has been double for decades. If these factors were important, why wouldn’t we see divergence between Britain and France? Why not further divergence of America?
From your analysis, it sounds [intuitively] like easily available credit would be a[the?] major driver of cost increases.
>Increased efficiency just increased benefits for owners. A worker is just a cog; increasing efficiency puts ever more pressure on the cog.
This is just recycled marxist progressive emiseration. It was wrong 150 years ago, and it’s still wrong today.
What’s the argument against it?
It’s demonstrably, empirically false, for one. The claim is that the poor are getting poorer absolutely; they clearly are not when you use even remotely objective measures. It also assumes that the workers have zero bargaining power because they don’t own the means of production. Putting aside the fact that they can always use part of their wages to buy said means, what is the means of production to a software engineer or sales rep? The most valuable sort of capital in the world is not physical capital, but human. Machines are much less valuable than people who know how to design, build, maintain, or operate machines. And human capital is, by definition, owned by the workers.
I will try to write more later, but I didn’t mean to imply that all workers are cogs. I meant that to the extent that you can’t say no to a job, you’re a cog. Software engineers have much more flexibility in their employment than burger-flippers do.
I meant to draw a parallel between looking for a job and shopping for health care (for example).
>I will try to write more later, but I didn’t mean to imply that all workers are cogs. I meant that to the extent that you can’t say no to a job, you’re a cog. Software engineers have much more flexibility in their employment than burger-flippers do.
Everyone has options though. Your assertion implies an effective conspiracy by all employers against employees. And more than that, just having an employee who’s been around for a while is hugely valuable because they’ve picked up lots of highly specific knowledge. I’m currently involved in replacing two people I used to work with who left for better options, and while the replacements will have similar credentials, it will be 6 months at least before they’re producing at the same level. Paying them more to stay would have been a good bargain for the company just because of the cost of training replacements.
>What you observe is fifty years of optimization of wealth extraction.
But why in only one capitalist country, out of all of them?
It is true in all countries, but it just looks different because the markets are different and willingness or ability to pay varies. I don’t have time to answer fully right now, but even in a place like India, the cheaper health care is just a result of the fact that people will do without rather than pay higher prices. Americans are willing to pay more, that’s all.
I live in France and although health-care costs are lower than in the US, they’re maxed out here too. Every bit of the system is extracting every last bit of possible profit. But, the constraints are different at every level than in the US: in terms of the legal framework, the tax system, the media etc. So, the millions of factors that make up health care rates give a different result.
But why are Americans uniquely willing to pay more? Doesn’t it sound like maybe there’s something else going on here? Because your explanation is very close to just special pleading, really.
Well, we’re richer. But we’re not 4-10x richer.
No, we’re quite a bit more than 10x richer.
2015 GNI per capita (Atlas smoothed) in USD for India was $1,600. For the US, it was $55,980.
Remember, in the West, the Indians we interact with are a very non-representative sample of the upper middle class. Don’t forget how many people live in abject poverty in the country!
Than Europe or South Korea, I mean. We are indeed more than 4-10x richer than India.
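For what it’s worth, the “more than 10x” claim checks out arithmetically from the GNI figures quoted above. A quick sketch (the $1,600 and $55,980 values are the 2015 Atlas-method numbers cited in this thread, not independently sourced):

```python
# Sanity check of the income-ratio claim in the thread.
# Figures are 2015 GNI per capita (Atlas method, USD) as quoted above.
gni_india = 1_600
gni_us = 55_980

ratio = gni_us / gni_india
print(f"US / India GNI per capita ratio: {ratio:.1f}x")
```

The ratio comes out around 35x, so the US is indeed well past the 10x mark relative to India, even though it is nowhere near that relative to Europe or South Korea.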
In Canada, people are living on higher and higher levels of debt. In regards to mortgages, more and more people are over-leveraging themselves (taking on debt something like 500% of annual income); this says to me that no one has the ability to say no. They’re demanding a certain quality of life, and they won’t be denied. They’ll pay any cost. 🙁
There’s another problem with our cult of efficiency: The things we sacrifice to get it.
From a distance, spare capacity and flexibility look exactly like wasted resources. So it takes a great deal of courage and a good understanding of the process not to cut it out. With the move to make employees as interchangeable as possible, few people are in a position to make courageous decisions. And with the ever-increasing complexity, those few who are will lack the detailed understanding to make these decisions in an informed way.
But systems need a bit of slack to work properly; otherwise, they will get stuck, and break if you force them. And it will always, by definition, be for “unforeseen reasons”, so nobody is responsible, has to fear getting sacked for it, or is in any way motivated not to make the system more fragile by getting rid of reserves. On the contrary, getting rid of reserves is where their bonus comes from.
What we now have are a lot of things that are incredibly efficient in theory, but break all the time in expensive ways and are barely held together by ad-hoc solutions (and occasional disregard of rules) in practice.
I have a hard time seeing the problem being the “cult of efficiency”.
1. Industries that are focused heavily on efficiency and optimization (manufacturing, agriculture, software development, etc.) are not experiencing the kind of increasing costs discussed here.
I only have an outsider’s view of the culture within these fields, but from what I have seen, they seem to put a lot of effort into profit maximization.
2. Industries that are experiencing cost bloat (education, healthcare, etc.) are not focused on revenue maximization.
In my university library there is a corner devoted to media development. On one of the walls is a display of seven (7) televisions artistically arranged in an asymmetric pattern so that they are worthless for anything other than decoration. They are mostly used to display a mural of pretty pictures. I am willing to bet that with just the electricity cost of those televisions over a two month period you could pay an art student to paint a halfway decent mural.
Could we have shortages of a lot of things? This came to mind when you mentioned housing, since the only topic in your post that I’m familiar with is housing politics in the Bay Area, and here the problem is absolutely that there’s not nearly enough housing because zoning laws are super restrictive, and even within the allowed limits the political process provides a lot of opportunity for NIMBYs to obstruct development. So the supply of housing has been barely increasing while the population has been increasing a lot, so housing is absurdly expensive.
The story you tell about hospitals quickly kicking people out postpartum makes me wonder if the supply of hospitals has been increasing too slowly as well? I don’t know anything about how hospitals get built/expanded – any obvious obstruction there?
For universities, at least the supply of *top* universities doesn’t change much even as the population grows – at least when I was applying to college in 2008 there was definitely something of an “arms race” among ambitious college applicants as people were applying to more and more schools in the hopes of getting into any prestigious ones. (But also, as you mentioned, universities give a *lot* of financial aid, so I wonder if the cost of college that you cite is actually reflective of what people actually pay – at least at my school I believe the majority of students receive some financial aid, and it’s all grants, so while the nominal price is absurdly high, not that many people actually pay the whole thing – it’s basically a way to have a sliding scale without calling it that.)
I’m not sure if this would apply to public schools since people don’t pay for that individually, though, so that would still need to be explained.
Regarding the supply of hospitals: http://www.ncsl.org/research/health/con-certificate-of-need-state-laws.aspx
What about approaching the problem from the other side? What are the industries which don’t exhibit cost disease? Groceries? Restaurants? Cars?
Then you could see if there is a common factor in those industries which is missing in the affected industries.
Boring answer is that the common factor is high/low cost growth. After all, why should industries with high/low cost growth see costs grow for the same reason? There are an awful lot of industries.
I agree with Michael Cohen, isn’t it just that “inflation” is being underestimated, and the increases in infrastructure/health/education costs reflect the true rate of inflation? If inflation is currently calculated based on the cost of food, clothes and consumer goods, and those things are getting cheaper (due to improvements in manufacturing and farming tech, and globalisation and sweatshop labour), then goods and services which are getting more expensive at a normal rate, unaffected by these reductions, will appear to suffer from cost disease.
I would argue that we’re way past the point where the simple concept of inflation is useful for discourse about the economy. The subject of price level changes is incredibly complex and increasingly important, but for some reason all discussion on the matter has been stunted by this incredibly simplistic model. We need to get past that way of thinking and starting talking about the interplay of price levels, how our spending is shifting between different types of costs, etc.
Doesn’t the concept of inflation come from changes in the money supply? In practice it refers to the increase of prices, but shouldn’t it only mean the increase in prices due to this one mechanism (that is of course hard to quantify)?
That’s backwards – increase in prices is the definition, money supply is one mechanism. Some discredited theories assumed that only a change in money supply could cause inflation, but increasing monetary velocity does the same thing – as can higher borrowing, etc.
If you view it that way, you’re still just shifting the puzzle, though, namely to: why are real incomes falling so precipitously?
I thought of this as well, so I looked it up and turns out the CPI does explicitly include the costs of things like medical care, education, and housing. Of course, that’s combined with other costs such as food, apparel, transportation, and recreation, and it’s hard to know (with my limited knowledge of US economics) whether either of these sets of categories skews the other.
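The mechanism being debated here is easy to illustrate with a toy basket. The weights and price changes below are invented for illustration (they are not actual CPI figures): when the goods that are getting cheaper carry enough weight, the headline number can stay modest even while medical care and education rise quickly.

```python
# Toy CPI: headline inflation as a weighted average of per-category
# price changes. All weights and changes are made up for illustration.
basket = {
    # category: (basket weight, annual price change)
    "food":        (0.15,  0.01),
    "apparel":     (0.05, -0.02),
    "electronics": (0.10, -0.05),
    "housing":     (0.35,  0.03),
    "medical":     (0.20,  0.06),
    "education":   (0.15,  0.05),
}

headline = sum(weight * change for weight, change in basket.values())
print(f"Headline inflation: {headline:.1%}")
```

In this toy example medical care and education are rising at 5–6% a year, yet the falling prices of apparel and electronics drag the headline figure down to about 2.5%, which is roughly the point the comments above are making about averaging.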
Could it be because money supply has increased by about as much?
Here’s M0, M1 and M2
That’s damned interesting.
My prior understanding of the money supply expansion is that it has mostly been successful in keeping inflation at the expected rate since we deployed it.
However, until you provided those graphs in this context, it never occurred to me that nothing would cause the expanded money supply to distribute throughout the economy evenly.
Also, if the inflation rate is commonly measured by the basket of regular consumer goods, does that mean they just kept pumping money into the economy until they overcame the efficiency gains in those goods?
This causes me to suspect that it might be a better idea to inject that money at the bottom, rather than into the large banks as they do currently. The money consumers spend does diffuse through the economy pretty effectively, as I understand it.
All your reasons are good candidates but one factor stands out. This factor also explains why correcting for inflation is so hard. It is this: the same service / product / object now has a lot more features / higher quality, even as it is still categorized the same thing as 50 years ago. This distorts the entire comparison.
You give these examples yourself: college includes a lot of goodies it did not use to include. Houses are bigger or have more goodies inside. Cars have a/c, GPS, and tons of safety features they did not use to have. Jobs come with way more benefits attached. The client at the hospital gets a larger settlement when s/he sues. The range of available treatment options is much wider. A lot of times, as you point out, the end customer did not ask for the multiplication of benefits in the package. Regulations did. In part this was for safety, in part for environmental reasons, in part because of a poor risk trade-off (overpaying for a very small reduction in risk). But the bottom line remains: we are not getting the “same” product we did 50 years ago. And if we did, many of us would be complaining.
35 years ago my grandpa used to use tracing paper to wipe his bottom. And thinking of it, that was what they used in schools too…
You might have been joking. But it’s a perfect example. Today’s toilet paper is probably vastly superior to the stuff of the 1960s AND produced in a more environmentally friendly way. Probably by orders of magnitude. None of this counts toward inflation adjustment.
My parents used to use tracing paper too when they were at school (1960s/70s Britain).
Yep! It’s true – you never forget having to wipe your bottom with tracing paper. I used to avoid using the school toilets for that reason – probably saved on toilet cleaning costs as well.
(That last one was a joke.)
I remember once in the early 2000s I came across a public toilet in an out-of-the-way part of northern England which still had tracing paper instead of normal toilet paper. As you say, a truly unforgettable experience.
Don’t remind me, Mark. I never thought of it as tracing paper, but we’d have been better off with old newspapers! 🙂
Wouldn’t it be useful to look at where this is and isn’t happening in the economy? Housing is admittedly an outlier, but health care and education are heavily influenced by government provision or regulation. Contrast with technology, transportation, food, clothing, personal services, etc. And then compare to sectors of health care that are less regulated/subsidized, like cosmetic surgery or LASIK surgery. Again, health expenses for pets go against the grain, but the trend seems relatively clear.
This is just some vague inkling – but I wonder how much (somewhere between 0% and 100%) additional administration jobs and duties are responsible for this?
Going to one of Scott’s many points – people doing extra work that amounts to pushing paper around in support of the various goods and services we actually want, rather than the goods and services themselves. Transaction costs, basically. Creeping overhead.
Is there some simple way to gauge the percentage of administrative jobs? Is there some way to compare how many direct-service people are working now vs. 30 years ago? Has our population grown, with the extra population taking on fruitless jobs?
To clarify, what I mean by fruitless is this: How much do you owe in taxes this year? No idea? Pay an accountant $500 to figure it out. This money doesn’t go toward paying your taxes, or helping anyone (and let’s assume the accountant could have spent the time accounting for something more fruitful elsewhere). It doesn’t accomplish anything except determining what number to write on the check you mail to the government. It’s an annual government-mandated math test, where you work through some equations and forms, get an arbitrary number out, and then that number serves no further purpose.
I’m not disagreeing with counting your taxes, or necessarily saying “flat tax solves everything.” But when you get down to it, that’s an activity we spend time and resources on that gets nobody what we want. It’s just overhead. And we could get rid of it if we deemed the side-effects from the increased coordination as insufficient to warrant the cost.
Also, how much have computers played a role in letting us do a bunch of things we could never do before… that don’t actually give us any benefit? A lot of these trends seem to track with widespread computers. Pure coincidence, or maybe causal? If you couldn’t use a computer, how would your job be different, and how would your overall end-product be different?
The cynical part of me says that college is much the same thing – spending 4 years researching medieval Renaissance poetry? It gets you a degree, and gives you a leg-up on getting a job, but you spent $100,000 of resources and 4 years of your life doing something arbitrary that won’t actually make you better at your job. It’s just a signaling arms race – you need a degree because your competitors all have degrees, and a degree provides some correlation with lower-risk employees.
So, how much time, and manpower, and resources are devoted to tasks like these? How much of population spends time doing that now? Are we paying more for things because there’s just a ton more overhead? How can doctors see fewer patients for less time? Are more people going to a hospital than normal? If they’re paying more, and doctors are paid the same, why wouldn’t more doctors just be hired? Are doctors doing more administration work and overhead now than before?
Lot of random questions – no coherence. Sorry about that. But I’m just trying to figure out where all our time and resources go – and if there are some easy large numbers of administration or ‘time-spent-doing-actual-job’ metrics that can find out where this productivity sink is.
The only thing I can conclude is that for some ethereal reason, we have two task masters that come by at noon and midnight – one wants a hole in the field, and one wants no hole, and we’re just spending a bit of our time digging it up and filling it back in because that’s what we’re being told to do. And over the last 30 years, they’ve started coming by more frequently, now every other hour, so we have to dig and fill 12 times as many holes a day.
The mind boggles. It also might be a good argument for founding new micro-countries every now and then on artificial islands, or in South America, or in Coventry (The Crazy Years are here, after all) where the entire dynamic of society and business and productivity starts from scratch, to see how much of that lost efficiency has just evolved over time and can be removed with a hard reset.
Gauging administrative versus teaching jobs in schools is easy;
The data has more detail (Librarians, Teachers’ Aides, etc.), but the headline figures:
District Staff totals, in chronological order:
– 33,642
– 65,282
– 75,868
– 133,833
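Taking the four staff totals listed above at face value (and assuming they are in chronological order, earliest to latest), the overall growth factor is just arithmetic on the numbers as posted:

```python
# District staff totals as listed in the comment above,
# assumed to be in chronological order.
totals = [33_642, 65_282, 75_868, 133_833]

growth = totals[-1] / totals[0]
print(f"Overall district staff growth factor: {growth:.2f}x")
```

That works out to roughly a 4x increase over the period, which is the kind of headline comparison worth holding up against student enrollment over the same span.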
Well, for schools that doesn’t look like much in terms of asymmetric growth. Teachers x3 and Admin x4 from 1950 to 2010. Not insignificant, but not gigantic.
Any indication on how many students are being serviced by all these teachers and admin? Have the number of students tripled since 1950, or are we employing a lot more people – teacher and admin to instruct the same number of kids?
It’s about the same number of kids – though their demographics have shifted a ton, and we are including many more kids that used to drop out early.
One possible answer is Debt.
The difference between now and then is basically debt. Every publicly traded company carries a large amount of debt. Every government has an enormous amount of debt. Every person has maxed out credit cards, is mortgaged to the hilt, and can barely make their car payments. This constant, non-stop load massively increases the price of everything.
A bridge is no longer a bridge. It’s a source of long term interest payments. A mortgage isn’t a mortgage – it’s 25 years of AAA-rated securities paying 4%. Companies have to carry ‘poison pill’ debt to stop corporate raiders using junk bonds to take them over and break them up. A car isn’t a car – it’s £1000 downpayment, easy monthly payments of £300 for four years.
It doesn’t explain everything. But the enormous debt burden we as modern societies carry is out of all proportion historically. It’s also the reason inequality is increasing – those that own or manage the debt get rich, the rest of us carry this growing parasite.
Data indicates that current Government Debt/GDP ratios aren’t unusual for post-war America – so you’d need something else to explain what CHANGED since the 1950s.
And the change in Household/Corporate debt doesn’t really explain it well either, either in terms of correlations over time, or in terms of microeconomic theory.
According to your data, the US government debt-to-GDP ratio averaged 66% during the 1950s, compared to 105% now. I would think that increase is a strong candidate for something that has CHANGED since the 1950s.
If by “not unusual for post-war America” you mean immediate postwar America, 1946-1947, sure, we are as indebted now as we were when we had just started paying off the debt from World War II. And suffering a variety of substantial economic hardships, though hard to quantify on a price basis because they involved things that hadn’t been manufactured for sale in half a decade or more (e.g. new cars and houses).
You want to look at the entire graph, instead of picking on a specific year.
Did all of this cost inflation start in 2008, to coincide with the huge rise in government debt? No – so this functions as evidence, albeit weak, AGAINST the explanation.
You picked “post-war America” and “since the 1950s”, not me. You were wrong about the facts for the specific eras you picked. Perhaps you might want to sit back a while and think about how to more clearly express whatever argument you are trying to make.
But you agree that cost inflation and debt are effectively uncorrelated based on the data?
Hm. You’re right, I find it really hard to believe there’s an underlying thing here, but your evidence is persuasive, so I think I probably *should* do.
My first thought was exactly about safety. When building the first subway tunnels, how many people died? How many houses collapsed? I don’t know, but I *hope* we’ve got a lot better about that kind of accident (I don’t know if it was worth it). Life expectancy has gone up quite a bit, does that account for increased healthcare spending?
But you’re right, a similar thing seems to happen in several different sectors, which suggests it needs related underlying causes, not a case-by-case justification.
My next thought was, has anyone done a decent country-by-country comparison? The UK sounds like America if possibly less so. But do all countries have the same tendency? Or do some have cost disease in construction but not healthcare, or vice versa?
FWIW, from outside the US it just seems like the health insurance industry is why US healthcare is SO expensive. But it seems like healthcare costs are increasing in most developed countries too, even if America’s is higher, so there’s some other problem that’s not dependent on that.
As Megan McArdle has pointed out, the US doesn’t actually have a health-care cost growth problem. The US had a health care cost growth problem in the 1970s and 1980s. Since then, our cost growth has been at about an average level for OECD countries, but because it was starting from a higher baseline, the total expenditure per capita is almost double the average.
Most simple explanations for how to save money that people come up with (“bulk negotiation!” etc) don’t apply here; read the linked article if you’re curious, as she goes through a lot of explanations and tears them apart in great detail with a lot of data. The one thing she doesn’t address that’s a frequently-cited significant cost factor (but not nearly as significant as people think; prescription drug spending is only 10% of US medical spending) is that the US essentially subsidizes pharmaceutical development for the entire world; see here for that explanation.
Just about every other source I can find on health care costs makes it look like the rate of US cost increases has been greater in the ’90s and 2000s as well.
In fact, looking at McArdle’s graphs, her data suggest the same. Look at 1998-2002 — there’s a huge increase in US health care spending that doesn’t show up for any of the other countries in the graph. She kinda skirts around this by using “even if” language, but that seems like a red herring to me. Yeah, even if the costs increases were the same, US would be paying more. But the cost increases weren’t actually the same and the US is paying proportionately more still.
She also does a fishy thing where she separates government from private health care expenditures in the US and assumes that under single payer the whole thing would be government expenditures and therefore worse than the combined government/private expenditures. This ignores the impact of the structure of the market on costs and prices, which…seems crazy to me.
The increases were lumpy, so a four-year trend tells us little. The long term trends are clearly what McArdle says they are.
McArdle’s whole point is that the structure of the market makes no difference to spending trends. To simply assume that prices would immediately fall ignores every empirical data point, including when Vermont actually attempted to implement such a plan and immediately discovered the opposite was true.
The US just does a lot more spending at the margins where benefits are smaller, and OECD levels of spending are probably rational given how much poorer they are than the US. There’s an argument that US levels are inefficient, but socialists rarely make that argument because Americans are horrified by ideas like “death panels” deciding who gets care.
Hi Scott —
I think many of your points make sense. But I want to extend your second, fifth, and sixth points to show how markets, risk tolerance, and insurance (which I’m going to use instead of ‘regulation’) are connected.
One way to do this is to point out that, if you believe economists and actuaries are capable of doing a good job of assessing and evaluating risks, certain things aren’t more expensive today: it’s just that past costs were hidden because they weren’t identified, enumerated, quantified, and incorporated. So, what was, decades ago, a significant but invisible risk borne by, say, a single patient in a hospital somewhere, is now spread out across the population; or rather, the risk borne by a patient has been shifted to the doctor, and then spread across the population of doctors. Ditto colleges that were once not required, if only by sheer financial prudence, to employ campus medical and mental health staff, VPRs, HR professionals, and so on.
Now: the question is *why* has this tendency toward capture emerged. Above, I read you to suggest it is because we have become less risk-tolerant through some emotional or moral change in character, which aligns with the usual (and usually grossly inaccurate) parables about e.g. hot coffee lawsuits. I would instead claim that it is because markets are working, just not in the way that people think they do.
The trend toward regulation (if you want it to sound bad and scary) or insurance (if you want it to sound prudent and technocratic) is driven by the same force that drives many things in the economy: it’s big business! The financialization of the economy, and of possible derivative economies, is the market at work, seeking new ways to extract surplus value now that the old ways (for example, starting a war with a foreign nation so they are required to buy your exported opium without you being required to give them much in return (or, if you prefer something more current, enforcing near-slavery conditions of rare earth mining that produces smartphones sold back to the miners for more than they made for mining its components)) are seen, to a greater or lesser degree, as abhorrent, along with the general tendency of the rate of profit to fall. Meanwhile, finance capital is abstracted from physical and labor capital, meaning it can be generated and manipulated and derived more quickly than traditional forms or within traditional markets.
In other words, higher costs, as the result of more regulation/insurance and concomitant with lowered risk tolerance, are not a sign that markets are breaking down, but that markets are working as they are supposed to work: to expropriate value for the holders of capital, particularly in this case financial capital, to the economic detriment of those subject to its actuarial terms, analogous to how more primitive forms of capital accumulation relied on the expropriation of value from slaves or the captive earth.
Cost disease is a symptom of advanced capitalism working as expected/predicted. It is only a disease to those afflicted by it; to those who benefit from it, i.e. the broader finance-insurance-regulatory regime, it is the engine of profit itself.
The bit about campuses hiring mental health professionals reminded me of Jonathan Haidt’s recent writings on the topic. Since colleges have more mental health services, you might expect mental health outcomes to be better for today’s students. But the opposite appears to be the case. I was going to bring up rates of suicide, but then I remembered that suicide is a far bigger issue in the military now than it was in the past. Greg Cochran might attribute our military’s troubles to listening to psychiatrists in the first place, as totalitarian governments got better performance without them.
Personally, I attribute it to a general decline in macho masculine culture, though I don’t have any data to support it. But I tend to think that the brutish, insensitive, chest-thumping culture traditionally associated with warrior males serves a very important purpose in desensitizing them from the horror their actions are inflicting. In civilian society, this is of course a huge negative. But in a military, it’s a desirable quality to the extent that you want your soldiers to be able to kill the enemy without suffering mental breakdowns. See also: Black Mirror.
Ding Ding Ding!!! We have a winner!
Explained in gory detail here:
Underpaid or free labour (by women and black people) was very common back then; much less so now. This societal shift alone could double every price in the economy while keeping wages stagnant.
I.e. household income rose even if wages did not.
The cases of women and blacks seem rather different. There’s been a large entry of women into the labor force, while employment among blacks (mostly men) has declined.
My view is that we are actually much poorer than we think (a combination of official measures wildly underestimating inflation and overestimating growth, each serving to mask the other) – maybe even poorer than our parents, if probably not our grandparents – a fact which is obscured by dazzling technology.
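To see how even a small annual mismeasurement could add up to “poorer than we think,” here is a minimal sketch. The 1-percentage-point error and 40-year horizon are purely hypothetical numbers for illustration, not estimates of the actual mismeasurement:

```python
def compounded_gap(annual_error_pct: float, years: int) -> float:
    """Cumulative overstatement of real income if measured real growth
    exceeds true real growth by `annual_error_pct` points every year."""
    return (1 + annual_error_pct / 100) ** years - 1

# A 1-point annual error, compounded over a 40-year working life,
# leaves measured income roughly 49% above true income:
gap = compounded_gap(1.0, 40)
```

The point is just that persistent small errors compound: whether official statistics actually contain such an error is, of course, the contested empirical question.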
There does, however, seem to be a generalizable version of Parkinson’s Law (“work expands to fill the time allotted” –> “costs expand to expend the resources available”), especially in cases when the consumer is disconnected from the cost-benefit analysis by a third party in some way.
If, for example, you offer a huge subsidy to help people buy x, but pay the providers of x rather than just giving the consumer extra money to spend as he wishes (though even that will push up prices somewhat, of course), the cost will tend to rise until no version of x costing less than the value of the subsidy continues to exist. And the extra spending does not go to the things the consumer would have chosen were he making the decision directly; more commonly, I think, it goes to precisely the things the consumer would choose if he were required to spend more on a thing than he really wants to.
That is, given a choice between a $5,000/year elite education without a lot of fun activities and material comforts and a $50,000/year elite education with a lot of fun activities and material comforts, most students will take the $5,000/year education. But if all elite educations cost $50,000, the student will choose the one with the awesome activities and material comforts over the one with better-paid professors, because ultimately they’re in it not for the professors but for a piece of paper saying “recipient of elite education.” If you force them to spend $50,000 to get that piece of paper, they’d rather have a lot of fun doing it than go to class with happier, more erudite professors.
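The subsidy-capture claim can be sketched with a textbook linear supply-and-demand model. All parameters below are made up purely to illustrate the mechanism: when supply is inelastic, most of a per-unit subsidy paid on the consumer’s behalf shows up as a higher price rather than as more of the good:

```python
def equilibrium_price(a, b, c, d, subsidy):
    """Linear model: demand Qd = a - b*(p - subsidy), supply Qs = c + d*p.
    The subsidy is paid per unit on the consumer's behalf, so the
    consumer's effective price is p - subsidy. Solve Qd = Qs for p."""
    return (a - c + b * subsidy) / (b + d)

# Illustrative parameters, not calibrated to any real market.
# Small d means inelastic supply (think accredited colleges).
a, b, c, d = 100.0, 1.0, 10.0, 0.1

p0 = equilibrium_price(a, b, c, d, 0.0)    # price without the subsidy
p1 = equilibrium_price(a, b, c, d, 30.0)   # price with a 30/unit subsidy
captured = (p1 - p0) / 30.0                # share captured as higher price
```

In this toy model the captured share is b/(b+d), about 91% with these numbers; the less supply can expand, the more the subsidy turns into pure price increase.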
To make a Bryan Caplan-ish argument: maybe the problem is that we are actually getting more of what we want, in many cases, but we don’t want what we think we want (we think we want job stability, respect, time with our children, etc., but actually want super soft toilet paper, giant houses with central air, SUVs, private hospital rooms, etc.), especially when we are actively discouraged from thrift.
Spelling: heterogenity -> heterogeneity
This is really scary. And while it might be worst in the US this can be seen elsewhere too.
The money has to go somewhere. As they say money is not lost it just goes into someone else’s pockets. Thus if we could track the big money flows (something I’d love to see) we could just apply network flow algorithms to see where it goes. With such a thing we could tease apart the given candidate explanations:
* First – if it were some inflation misadjustment we would just see a uniformly scaled network.
* Second – we’d just see larger and larger flow from customers into the network but uniform from there on.
* Third – we’d see it in a relative change in the sub networks of public and private sector.
* Fourth – I’m not sure how you’d see that in the network – maybe total internal flows in sub nets.
* Fifth – relative change in flow to law
* Sixth – my favorite, but I don’t see how that effect might show up in the network
* Seventh – we’d need distinct tracking for different economic groups
* Eighth – flow to pension (might require distinct flows by age)
* Section V, “the rich” – we’d see it in network flow toward the rich
Other possible explanations:
* Other reallocations within society, e.g. between age groups, economic groups, social groups
* Global monetary flow e.g. to from other countries, change in flow between economic sectors
Problem: we don’t know the money-flow network, and there seem to be incentives to keep it unknown. But maybe it can be approximated, or the dark flows bounded?
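Two of the checks proposed above can be sketched mechanically. The flows and sector labels below are entirely made up, just to show the shape of the analysis: the uniform-scaling test corresponds to the inflation-mismeasurement explanation, and sector shares to the reallocation explanations:

```python
def is_uniform_scaling(flows_then, flows_now, tol=0.01):
    """True if every edge grew by (roughly) the same factor -- what we'd
    expect if rising costs were pure inflation mismeasurement."""
    ratios = [flows_now[e] / flows_then[e] for e in flows_then]
    return max(ratios) / min(ratios) - 1 <= tol

def share_by_sector(flows, sector_of):
    """Fraction of total flow received by each sector; a shift here would
    point at reallocation (e.g. toward administration or law)."""
    totals = {}
    for (_, dst), amt in flows.items():
        totals[sector_of[dst]] = totals.get(sector_of[dst], 0.0) + amt
    grand = sum(totals.values())
    return {s: t / grand for s, t in totals.items()}

# Made-up numbers purely to exercise the functions:
then = {("patients", "doctors"): 70, ("patients", "admin"): 30}
now  = {("patients", "doctors"): 90, ("patients", "admin"): 110}
sector = {"doctors": "care", "admin": "overhead"}

uniform = is_uniform_scaling(then, now)   # not mere uniform inflation
shares  = share_by_sector(now, sector)    # overhead's share of the flow
```

With real data the graph would have thousands of edges, but the logic is the same: each candidate explanation predicts a different signature in how the flow network deforms over time.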
This isn’t secret, it’s just an exercise in data analysis. We don’t even need network flows, we can see the answers directly.
For public grade/high schools, we know basically exactly where money is/goes; https://nces.ed.gov/programs/digest/d13/tables/dt13_236.10.asp
For health care corporations, they publish their income statements – we can see their total income / profit levels, and similar; http://www.justfacts.com/healthcare.asp
People do the analysis, and usually they come to the conclusion that it’s a lot of things all working together.
This is “only” the part about the educational system. It doesn’t tell you where the money goes once it reaches the corporations – what ends up in whose pockets. You don’t see that, nor the cross connections between clusters (law, health, finance). Sure, lots of data is there; pulling it all together is the hard part. I have yet to see a diagram of these flows. I once saw one that took apart the flow in German health care – but again, only in the public sector.
I think you are dismissing comfort levels too easily. Campus services, private rooms in hospitals, etc. If today’s college students have grown up in large houses with well-stocked fridges, driving everywhere, they will expect something similar in college. They will not put up with draughty halls, 1940s cafeteria food, cycling, and a minimal administration that ignores complaints. Likewise, boomers with large comfortable suburban houses will not accept a bed in a crammed hospital ward.
This may look like complaining that people have gotten too comfortable, weak, etc., but it may reflect perfectly rational choices. If it is OK to spend money on SUVs, comfy sofas, big houses, and fancy food, why not on campus gyms and private hospital rooms? People have gotten richer over time, as you see in GDP-per-capita growth, and they will spend that money.
I think it is similar with playgrounds. Back when many children got polio or TB, it was hard to care too much about playground injuries. Today those diseases are gone, so people chase smaller risks.
I think this is a reasonable take in some cases. Doesn’t (directly) explain some things like subway tunnels though.
To continue on this theme, teacher salaries may not have increased much, but there are now more well-paid alternative careers, especially for women. It wouldn’t surprise me if the average teacher is less intelligent today than a couple of decades ago. That is not meant as a slight against teachers, it is just people responding to incentives.
For subways: could it be that people today will not put up with the kind of back-breaking work of building a subway in 1900? Especially not intelligent and capable people who have much better options. You could hire the bottom 10% to build subways, but can you trust them with heavy equipment?
Virbie has an interesting story above about an efficient and well-managed hotel in India. I think the hotel owner is rather intelligent, but because of few opportunities he is running a cheap little hotel. In America he would perhaps be running a far more economically productive tech company. The hotel would be part of a chain, managed by someone less intelligent who isn’t given much autonomy and is expected to follow standardized procedures.
Scott mentions South Korea as a comparable country with cheaper subways, but Korea still has a lot of poor people. It has developed quickly, but there are still many people who could have benefited from education and more productive employment, but instead have to build subways. I think South Korea will see costs boom as that generation gets older.
That’s a good point… a campus today has vastly more amenities than a campus in 1940… why are people surprised that it should cost more too?
Is it possible that, with an increase in available amenities, we have come to rely on amenities for fulfillment, leading to a self-reinforcing cycle? Listen to a retiring professor wistfully recount the old days, and you get the impression that some mixture of drinking, reading, chasing tail, conversing, writing, pranks and fairly unstructured sports was sufficient for thoroughly enjoying college. We might have unlearned the art of enjoying ourselves in those ways, so we use resources/amenities as substitutes to get to the same level of satisfaction.
In my current job, I’m back in education (an early intervention service for children with special and additional needs). I’m only clerical support (thank God!) and our service is one of the ones selected to implement and trial these recommendations. We have sixteen different areas to cover and we have to provide detailed breakdown of how we’re complying and using these standards. (I’m going cross-eyed typing these up and pulling our policies etc. from everywhere to show we’re compliant and turning it all into a readable document and not a brain-melting unholy tome out of a Lovecraft story).
I’ll give you a sample of the headings for one area, Environments. This is one component of eight in one section of sixteen:
My problem with this is that I’m paying roughly $2000 a month on food and housing, both of which I must buy from my college. If I had the option to rent an apartment and buy my own groceries, I’d do it in a heartbeat and laugh all the way to the bank.
Could you not have found a college that offered a similar level of prestige and expertise in your chosen field that did not have these restrictions?
Honestly, probably not.
Prestige is a tricky thing to quantify, but as a culture we have decided that having attractive dorms and other facilities is one of the things “prestigious” universities do. And universities with those facilities have often designed the campus in such a way that attending students need to live on campus.
Almost by definition, prestigious institutions will have those restrictions.
It’s messed up, but from what I have observed that’s how it is.
Our host studied in Europe. I honestly don’t understand why basically every American doesn’t at least try this. Many European universities are for some reason practically begging for foreign students.
(Granted, being European myself I don’t have a good feeling for the prestige issues involved. I do believe though that the academic quality is at least comparable.)
Some of the difference between health care pricing in the 1970s and today is because the modern technology / medications cost a lot to develop, and there’s a price increase built in to compensate for that; the price to physically build the device or synthesize the medicine in a lab is lower than what it would be if you factored in research costs. You also have the cost of training nurses and doctors to use new machines. I doubt all this explains a large fraction of the increase, and for medication it mainly applies to branded medications still under patent, which excludes drugs developed before sometime in 2000-2005 (roughly) with a tiny handful of exceptions that got special extensions. I can’t imagine it explaining none.
Doesn’t change the main point, of course. We’re still facing a price increase well beyond what’s truly necessary. But there are costs to technological advancements that require extensive research, and these aren’t always obvious from the current production costs.
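The amortization point can be made concrete with a toy calculation. All numbers here are invented for illustration (real pharma pricing also folds in discounting, failed candidates, marketing, and risk premia):

```python
def break_even_price(marginal_cost, rd_cost, expected_units):
    """Minimum per-unit price that recovers R&D over the patent window,
    ignoring discounting, failed projects, and marketing for simplicity."""
    return marginal_cost + rd_cost / expected_units

# Illustrative only: $2 to synthesize a pill, $1B development cost,
# 500M pills expected to be sold before the patent expires.
price = break_even_price(2.0, 1_000_000_000, 500_000_000)  # $4.00
```

Once the patent lapses, generics only need to cover the $2 marginal cost, which is why the technology-cost story mostly applies to branded drugs still under patent.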
Does anyone know an easy way to get comparable data from common law vs. non-common-law countries?
It would be interesting to know whether common law countries are consistently experiencing this effect more than others and, if so, what part of the effect might be due to the risk of being sued into bankruptcy, with no easy way to put a number on that risk, as some commenters pointed out.
I discussed a little bit of my initial thoughts on why I think the Cato graph is misleading.
It looks like Scott demolished a large part of my argument. But I still think the graph is misleading for the reasons I mentioned.
Scott, citing that Paul Campos piece uncritically undermines your claim about college. I rebutted it at length, but I’m afraid the rebuttal went down with my website. Total student state expenditures are indeed up… because we’ve vastly increased the number of students going to college. Per student state expenditures have dropped dramatically. Campos knows that. When challenged about it he’s evasive, because he knows it rebuts his claim. Of course when we increase total enrollment by a third in 10 years – thanks to a bipartisan political effort – you’re going to get higher total expenditure. But per capita expenditure is what matters.
Otherwise good piece. Of course college is getting far too expensive far too fast, but let’s be clear that total state college expenditures are rising because of a dramatic increase in attendance that has only recently leveled off somewhat. (Which does not undermine the point that college is far, far too expensive, even aside from it being inadequately funded.)
(ps a spectre is haunting Scott Alexander….)
You ever see this, Freddie? http://www.rhizzone.net/article/2016/01/27/article-review-book-review-manufacturing-consent/
There’s a special place in heck for the campus liberals who traumatized Alexander, because he’s been bumping against the obvious case for socialism without being able to cross the line for a long time. It’d be amazing to see such a powerful and prolific intellect finally break its chains, but I’m not sure what could manage it at this point
I’m sorry – what is the obvious case for socialism and where in the article you linked does it appear?
Here is your own writing, I assume you were referring to that article.
>Per student state expenditures have dropped dramatically.
Establishing whether this is true or not seems massively important. Anyone got thoughts and evidence about the claim?
For a look at California’s state spending on higher education per student, look at page 5 of this report.
I don’t necessarily endorse the conclusions of the report, which are very hand-wringy special-interest pleading. But the data in that graph is solid and absolutely confirms what Freddie is saying. In constant 2010 dollars, California appropriations per student in UC were halved between 1965 and 2012.
This doesn’t have any bearing on whether or not it’s good policy for the state to be subsidizing higher education in the first place. (I’m sure Freddie would say it’s an unqualified good; I’m extremely dubious.) But it does mean that examining tuition is not a good metric for overall college cost because subsidies have been changing dramatically in the background.
It’s also worth noting that someone citing state college tuition is probably being manipulative. A bunch of states have slashed government subsidy of state colleges, with the result that tuition has risen towards private levels without an increase in per-student spending.
The other important factor when looking at state schools is to consider the balance of in-state versus out-of-state students. My wife worked in the UC administration up until a few months ago and has talked in the past about how the UCs are deliberately trying to expand their foreign student enrollment because they pay full freight and are subsidizing the in-state students. This has become absolutely crucial to undergraduate education in particular as state spending has decreased (in absolute dollars, none of the usual “failure to increase spending is a ‘cut'” crap) over the last decade.
There’s currently a bill pending in the State Senate that would cap the maximum number of out-of-state students enrolled in the UC system to 10% of the total; it would be a disaster of epic proportions for the colleges. The idiots backing it think that foreign students are “stealing the place” of in-state students; they don’t realize that the universities wouldn’t be able to admit nearly as many in-state residents if the out-of-state money wasn’t picking up the bill.
Don’t worry, the Feds will render it moot with immigration policy.
My understanding is that US subsidies have greatly increased, more than offsetting any state decreases. I surfed the internet a bit and couldn’t find a source, but I think it is true. I think most of this consists of the large education credits on Form 1040, which didn’t exist 20 years ago.
Irrelevant to unit costs, which are still spiraling out of control for no good reason, and are the primary concern.
1. Consider Jevons Paradox. Returns to education based on the changing economy could make it more valuable, even if the experience isn’t that different. And we’ve got a high material standard of living, so spending more to live longer but hitting diminishing returns might be expected.
2. I mostly experienced malpractice from the dark side. Malpractice attorneys will describe the situation very differently than doctors. While that may be true in part, I don’t think this difference is fully explained by attorneys being pure evil. Specifically, they might take issue with your description of the system as leading so easily to millions, as if it’s a really great lottery ticket. More to the point, they might insist the system would be less oppressive if, for example, surgeons could somehow be induced to actually wash their hands and not leave surgical instruments in people. I know, most doctors will insist those are solved problems, and probably never read a handwashing compliance study, or spent much time with anyone who has carried a sponge or scalpel around inside of them for a while. Both sides have a bias, and while I’m reluctant to argue to moderation, I feel like the biases are generated by selective experiences on both sides, so I think it’s a reasonable heuristic to expect the truth to be somewhere in the middle here. Maybe I’m wrong though…
Could it be that doctors treat this like the general population treats terrorism? That is, a doctor doesn’t have a 5% chance per-patient of being found at-fault for $100. Instead, they have a 0.001% chance of being hit with a multi-million dollar judgement. That kind of thing can strike terror into a profession, even if it never happens to them personally. You only need a couple of borderline cases to scare the crap out of everybody.
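That asymmetry can be put in numbers: two liability regimes can impose the same expected cost per patient while differing enormously in how terrifying they feel, because the variance differs. The probabilities and dollar amounts below are hypothetical, chosen only so the expected values match:

```python
import math

def ev_and_sd(p, loss):
    """Expected value and standard deviation of a one-shot loss that
    occurs with probability p and costs `loss` when it occurs."""
    ev = p * loss
    sd = math.sqrt(p * loss**2 - ev**2)
    return ev, sd

# Same expected cost (~$50/patient), radically different risk profiles:
ev_small, sd_small = ev_and_sd(0.05, 1_000)        # frequent, small claims
ev_big, sd_big = ev_and_sd(0.00001, 5_000_000)     # rare, ruinous judgments
```

The rare-but-ruinous regime has a standard deviation dozens of times larger, which is plausibly what makes doctors treat it the way the general population treats terrorism, even when the expected costs are identical.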
There’s been an ongoing battle there between doctors and economists. The economists keep failing to find much evidence to support lawsuit risk as a driver of increasing costs. The doctors insist it’s crucial, and if economists’ methods don’t find it then so much the worse for the economists’ methods.
I’m pretty sure Scott has written about it before, but couldn’t find it in a quick search.
Hmm, interesting way to look at it, thanks for the reply!
To be clear, I think both professions are at risk of some bias.
The malpractice attorney will see the checks and balances eliminating some frivolous cases every day, and that routine will make the properly disposed cases the more typical memory than the occasional frivolous case that slips through. So the attorney could under-rate the harms and frequency of frivolous cases.
The doctor will talk about the one frivolous case that slipped through and ruined a colleague’s life, because the high harms will make that more salient, and they will over-estimate the probability of such a case. (Even if a doctor gets caught up in a frivolous case that’s quickly dismissed, they’re more likely to view that as evidence that the system nearly failed than as evidence that the system is good at filtering out bad cases.)
So if each side is poorly calibrated, and in opposite directions, is there any improvement both might agree on?
One possible improvement for both sides would be if we could convincingly improve the “sorting function” that distinguishes frivolous from serious medical transgressions.
“Improvement” could mean greater consistency in hard cases, faster dispositions for frivolous cases, or some of both.
Precertification of cases before they proceed that can be handled by electronic submission of documentation might help avoid disrupting doctor lives in the most frivolous cases. Machine-learning might even be helpful at all stages to improve consistency of adjudications.
Either way, this discussion is typically punctuated with the extreme easy cases, but any notable improvement in consistency probably involves focus on the borderline cases, the ones that are hard to decide. Those strike me as under-discussed.
Another facet of our justice system that non-experts find odd are punitive damages. These are designed to shape behavior, but are described as a windfall for the plaintiff. They’re a powerful tool to encourage preventions that are far cheaper than cures. However, if punitives were more commonly redirected, not to plaintiffs, but under a judicial order to funds that help prevent similar harms in the future, then they might not strike so many non-lawyers as unfair. The purpose of punitives would be more intuitive.
(Sometimes people get confused about punitives because they cite exorbitant jury awards and ignore that those awards were later reduced by a judge. I don’t have a solution for that except that people should try to get all the facts first.)
I just feel that people treat this as zero-sum between doctors, attorneys, and patients. It strikes me that there are possible pareto moves that are under-explored.
Let’s look at two possible cases. The first is purely hypothetical, and the second is a factual case, heavily modified for privacy and liability reasons to the point that it doesn’t quite make sense.
Crazy guy goes to the ER for strep throat. Gets typical/appropriate antibiotic treatment. Files a lawsuit claiming the doctor didn’t provide informed consent about the side effect of his teeth itching. Hospital/insurance settles for $1000 and a tray of free turkey sandwiches.
Photogenic woman in college is studying to be an aeronautical engineer, with the goal of being an astronaut and the first human on Mars. Starts to experience weird signs/symptoms, but only once in the presence of a medical doctor. ER doctor asks about a family history (parents, siblings) of bleeding disorders, naming off the top dozen or so. Woman and mother say “no” to all of them. No idea what’s going on, but not obviously dying right now, sent home. Multiple presentations at the ER, but no obvious cause. Well, it turns out that the woman has a bleeding disorder, and as a result her kidneys fail and now she needs to be on dialysis. Nobody on dialysis is going to be an astronaut, so the dream of going to space is crushed, despite her being otherwise able to live a normalish life. It turns out that the woman’s sister also had hemophilia, which was one of the medical conditions explicitly asked about. But, since she’s photogenic, everybody’s afraid to risk going to court, so there’s a settlement for multiple $MEGABUCKS.
In case 1, the overhead for everybody involved is pretty minimal. It’s a clearly frivolous lawsuit, but worth settling for nuisance value. Nobody’s going to suffer professional harm from having too many patients who sue over their teeth being itchy. There’s cost there, but it’s pretty constant, and requires little stress or time from the doctor. Perhaps an hour to read a summary of the complaint from the lawyer and respond with “he’s crazy.” It’s annoying, but few doctors are shaking in fear of this kind of claim.
In case 2, the doctor involved did due diligence. He asked the appropriate questions and was given incorrect information by the patient and the patient’s mother. Had he been told “yes,” the cause could have been found before permanent harm was done. The alternative would be to do moderately expensive testing for any suspected condition, even though the suspected conditions have a strong genetic component and would be expected to show up in the rest of the family if present, and the family denies having them.
Here, the fear is that a jury, faced with a sympathetic, photogenic patient with a bad medical outcome and a life’s goal crushed will vote against a doctor/hospital/insurance company, despite all of the core points of Standard of Care having been met. It’s (merely) a bad outcome with a rare disease.
Your cases are pretty far fetched.
Here is what is more likely.
Crazy patient comes in with a sore throat. Strep culture is negative but patient becomes belligerent and is given antibiotics because he is demanding them. He has mild side effects and attempts to sue the doctor but the lawyers refuse the case because there is no significant financial loss so the patient just spends his life maligning the doctor and complaining about modern medicine.
The young woman comes into the ER complaining of a headache. She does not have any other major findings. The ER doc thinks, “this is the third headache I’ve seen on this shift and they’re just trying to get pain meds.” He writes a script for pain meds and sends her home. She has a stroke later that day and has lasting neurological damage, so she will never be an astronaut. She sues the doctor and hospital and wins. The doctors and hospitals decide that everyone with a headache needs a CT scan, and the patients cheer because it is clear that the doctors now really care about them. Except for the minority of patients who complain that doctors are just doing the CTs out of greed because they want to make money off of vulnerable patients.
LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY, AND WE’RE MOSTLY JUST DESPERATELY FLAILING AROUND LOOKING FOR SOLUTIONS HERE.
The issue is, hardly anyone pays out of pocket for anything anymore. There are so many programs to avoid or defer paying: Medicaid, Medicare, free emergency room treatment, generous student loans, grants, and financial aid for people of almost all income brackets, and student loan forgiveness.
It costs $200k–$1,000,000 to treat leukemia… and the average household has negative net worth… even the uninsured get the same treatment… obviously, the typical patient is not paying that, so it gets added (along with other cancers and costly diseases) to per-capita medical costs.
Furthermore, it’s very hard to collect on defaults. Costs may be rising at a rate greater than inflation also because people are requesting more total education and more total healthcare… instead of going to the hospital only for life-threatening stuff, they go for a stomach ache or a small cut, along with more spending on elective procedures (nose realignment and stuff like that). Colleges and high schools have more elective programs, bigger campuses, more computers, more courses, more staff, etc. than in the past. As the preceding examples show, quality is going up – colleges have vastly more courses and opportunities than a generation ago. Some colleges have AI and robotics labs… stuff like that.
Medical care has become vastly more advanced too: some cancers can now be cured and survival prolonged for many others… treatments for so many diseases have gotten better than generations ago. Imagine if we could extrapolate the chart of medical costs back to medieval times… yes, it would have been cheaper, but also completely useless (actually iatrogenic) in terms of quality of care (as anesthesia did not exist and they believed in things like ‘humors’). Does this answer the question? Probably not by a long shot, but it’s just one perspective. IMHO, when you adjust the costs in terms of out-of-pocket costs vs. sticker costs, quality of care, and innovation, perhaps it’s not so bad.
Government-level bureaucracy and waste, along with inflated medical device and drug costs and billing by unscrupulous companies, probably also play a role, although it seems intellectually lazy to blame everything on that.
This is one of those issues that seems impossible to ever resolve and is something we will just have to learn to live with.
This argument that it’s about technological progress and associated cost would make sense if the US was living in an age of medical miracles and the rest of the OECD countries were still living with 1940s-level medicine. But that’s simply not the case.
You can definitely make an argument that we’re subsidizing the R&D for the world, because treatments are developed with the expectation they can be sold in the US at full price to offset the capped prices they have to be sold at everywhere else. But that’s different from saying that the inherent cost of treatments is the driving cost factor.
It’s more true than you might think. The rest of the OECD does not, for instance, generally have hospitals with private rooms, and the US does several times more diagnostics and transplants and has stricter standards. We can argue those expenditures are not efficient, but that’s a tradeoff.
If prices don’t go up, what happens? The system unravels. If dollars gain value sitting in your pocket you sit on your dollars, especially if you’re a bank.
If we didn’t have the growth of the real estate, healthcare, finance, and education sectors, where would the economy be? This is what is keeping us afloat. If these industries stopped growing in revenue nothing would automatically pitch in to fill the void. Remember that your spending is someone else’s income. This is what pumps the lifeblood of the economy and keeps everything working, including really important stuff that you can’t live without.
Economies need money pumps. Inflation is a money pump. Taxes are a money pump. Economies cannot stay afloat simply on the ‘voluntary,’ if you like, spending of people who are not pressured by anything besides the desire for goods and services. People are too happy to sit on their cash, and since their spending is another person’s income, this leads to people struggling to make ends meet and being even more reluctant to part with their cash. It’s just totally nonfunctional. The alternatives are all much more ‘ugly’ than the dream of a society based on voluntary unpressured interactions, but they have the advantage of actually working.
When you take a hard line on low taxes & free markets, you get low taxes and high mandatory payments of another sort, and ‘free’ markets. Cost of housing isn’t a ‘tax’ but good luck not paying it. Education and healthcare are a little more gray but it takes a really thick skull to claim that people can just take them or leave them in the modern world. It’s pretty much involuntary spending. Hell, if I get taken while unconscious into an ambulance they will hand me the bill even though I didn’t agree to pay anything.
There’s a place for purely voluntary, transparent transactions, with negligible trickery or coercion involved. That place happens to be fairly small. People need to just accept that for technical reasons economies have to be top-down to a pretty large extent, and decide whether they want their masters to be formally designated as such or assigned on an opaque ad-hoc basis, responsible in principle for their well-being or responsible in principle for milking them for as much cash as possible.
If these industries stopped growing in revenue nothing would automatically pitch in to fill the void.
But what Scott seems to be asking is “Where is the money going?” It’s not going into wages, it’s not going into reduced working hours, it isn’t all going into lining the pockets of the shareholders or investors.
You can’t spend what you don’t have in your pay packet, and if the money is going into buildings rather than wages, then it’s not raising incomes to drive the spending that pumps the economy.
Where did the money go? I’d hypothesize capture by the professional/managerial class. (And yes, like probably most of you, I’m also a professional, so this is self-criticism.) At the very highest level, this is about CEOs and bankers approving themselves YUGE bonuses and salaries (and hence why productivity gains don’t show up either in profits or in working-class / peon salaries). Now of course the surplus has to be there first, but public companies really aren’t about profit maximization. For CEOs, the main thing is to meet analyst expectations. They manage this both by talking the expectations down, and also by spending surplus on themselves or on vanities.
In non-profit enterprises like government agencies that also control salaries, this just shows up in different ways – most commonly in “empire building” by senior managers, or in investing in projects with psychic rewards: they are “leading edge”, politically favored, virtue-signalling, whatever.
At lower levels, or even at high levels where you can’t set your own salary, the cost shows up in other ways – usually a mid-level person who can capture some surplus begins “building their empire” by hiring lots of junior staff. They benefit from less unpleasant work for themselves and the psychic rewards of being the boss, but also from a security gap: when layoffs or cuts periodically do come, I have a layer of staff I can let go without my own job being at risk. This of course assumes they are able to capture some surplus, but it’s not that hard – budgets are sandbagged, you luck into more revenue, etc. Or maybe it’s like my daughter’s doctor, who pushed for the procedure he’d get paid for over the slightly better one for her that someone else gets paid for.
Those of you who are economists might expect that this would show up in higher administrative costs, but you are really underestimating professionals to think this. Professionals are (usually) way too smart to classify their extra hires as admin. Example – I could probably finish the project with 2 people, but I can get a budget for 5 and an extra six months. Do I hire an admin assistant, or just some extra junior people to have around?
The main assumption here is that professionals are best positioned to capture surplus (vs. labor or capital shareholders). Given their inside knowledge and general savvy, I think this is plausible. I’d love to see any studies on this topic – it seems to bear out in income disparity stats as well as in why not all returns go to capital, as Scott points out, though I probably have some confirmation bias going on there. The other big assumption is that there is lots of surplus – but one way to read Scott’s stats is that the costs are just surpluses accrued by large organizations.
Supporting anecdote in the other direction – our kids go to a private religious school with $5k/kid tuition and good results. Costs are low basically because the woman who runs it is a saint – she’s happy with her only possessions being her books and 10 dresses in the closet of a very small house across the street. It sets the tone for the whole institution, which doesn’t spend on niceties most other schools (private and public) expect. If only we could replicate sanctity 😉
It’s going into wages, in very large part. Obvs. Someone described a corporate dynamic where you want to hire people because that means you’re in charge of more people, and thus more important and prestigious. This is a better way to burn money than increasing wages, which makes the people you’re in charge of more prestigious! Disaster! Better give yourself a healthy raise and hire a bunch more underlings.
Now, if you follow any nurses or doctors, you’re aware that under-staffing is a big problem. How can that be when employment in the sector has quadrupled while the population rose only about 60%? Well, clearly the extra people aren’t doing anything particularly useful. Which shouldn’t be that surprising, given how they got there.
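The arithmetic behind that staffing claim is simple enough to sanity-check. A minimal sketch, taking the commenter’s 4x and 60% figures at face value (they are not independently verified here):

```python
# Toy check of the staffing claim: if health-sector employment quadrupled
# while population grew ~60%, staff per capita rose by a factor of 2.5.
employment_growth = 4.0   # sector employment multiplier (commenter's figure)
population_growth = 1.6   # population multiplier (commenter's figure)

staff_per_capita_ratio = employment_growth / population_growth
print(round(staff_per_capita_ratio, 2))  # 2.5
```

So even with much more staff in absolute terms, the under-staffing complaint would imply the workload per patient has grown faster still, or that the extra staff are absorbed elsewhere.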
Although come to think of it the proportion of old people has grown considerably, and old people produce an astounding amount of useless sisyphean busywork in the health sector. Perhaps the ability of the system to reject at least some of this useless busywork has eroded as lawyers have pushed their spigot deep into its juicy flesh. And the workforce is getting older and probably less vigorous as well. They might just legitimately need way more people to do a worse job. Of course, if all health care workers just went straight to work at 16-18 instead of wasting several years and tons of money on mostly useless education that would help things considerably.
Health care workers want to keep the price of admission in years and dollars high in order to avoid invalidating their investment and keep job security – the best way to maintain a scam is to give all the suckers a trickle of the dough. Naturally the ‘education’ shysters make that effort vain by swallowing more and more people into their system, often with government’s help. Seeing the value of their credentials erode just makes everyone hold onto what is left even tighter. I wonder how far the scam has progressed since the 60’s… How many years and dollars were spent by the average health care worker then?
And the long periods of stable prices and economic growth in the 19th century were impossible?
The reason not to sit on your dollars in a world of stable prices is that the real interest rate is positive, so you make more money lending them out.
How did people make said cash? These arguments are always circular: “people have plenty of money, and won’t spend most of it!” “How did they make that money?” “Someone else bought goods and/or services from them!”
How do you get an economy with people able to earn more than enough money to cover all their needs and wants AND have an economy where people are sitting on big piles of cash?
Would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $72,000?
They didn’t and I didn’t, though unlike my father I did not leave school at fourteen to start work and help support the family 🙂
The big IF there is the “modern technology” and I don’t see any way around paying more for that. Maybe not at the same rates as the increases of today but certainly not at the same prices as “their parents’ generation”.
And no, I don’t want the same amount of health care as my parents’ generation, because that was “nothing”. My mother had a young brother who died in infancy of what, looking back at it from the space of thirty years, was probably leukaemia – but it was diagnosed merely as “failure to thrive”, and even if it had been diagnosed correctly, there was nothing that could have been done. My mother also had two sisters who died in their 30s/40s leaving young families behind; I’m kind of shaky on the cause there, because the only time I heard it discussed something about whooping cough was mentioned, and that’s generally only fatal for babies. TB was the big killer in Ireland at the time. Our local regional hospital started off as a TB sanatorium (the reason the late Dr Noel Browne has iconic saintly status in Ireland is that he tackled the TB epidemic and reduced mortality rates by 90%, so that TB is more or less a disease of the past).
So no, I would not go back to my parents’ time in health care, even for an extra eight grand a year in my pocket.
Some new tech is, some isn’t. Fancy imaging work in particular looks like a completely new line item, but I doubt new drugs (which are most of what Scott’s talking about) are going to be much more expensive than old ones on average — synthesis might involve some complicated organic chemistry, but IIRC most of the real money goes into R&D and compliance. New surgical techniques are probably somewhere in between, depending on whether they involve e.g. robots and endoscopes.
Of course, that “compliance” line is going to be a big deal — a few years ago I interviewed with a maker of medical electronics, and it quickly became clear that half the time or more of my job description would be crossing bureaucratic Ts and dotting bureaucratic Is. I didn’t take the job.
Well, I WOULD prefer to have health care equivalent to what my parents would have had in, say, 1975, compared to today (assuming availability of medication that is now generic, which might be 25 years more modern than 1975.) But this is because most health care in the USA today is completely unaffordable to out-of-pocket payers. All I can get is prescriptions for some medications. If it can’t be diagnosed cheaply and treated with generic meds, I don’t get any treatment at all.
Like, yes, severe pneumonia would be more likely to kill a rich person or someone who qualifies for health care assistance in 1975 than 2017. But it’s more likely to kill those of us without a lot of money or health care assistance in 2017, because we don’t have enough money to treat it beyond possibly a low-cost generic antibiotic and going to bed early. IVs, hospital stays, etc. are completely out of the question. Older-style, cheap care would improve the options that we can realistically access.
Not only that, but for a lot of poor people, an extra $8000 (or $6000-ish, given the extra costs of having somewhat improved meds and devices available) decreases their need for health care by improving nutrition and safety and keeping them from working themselves to exhaustion. I know people who work three jobs to survive (not all full-time, of course – usually one full-time and two part-time). I have one full-time and one part-time job myself, which isn’t ideal but stinks much less than working three. $6000–$8000 would enable those with multiple jobs to eliminate one of the part-time jobs, or turn one of two full-time jobs into a part-time job, reducing the stress and fatigue that aggravate health problems.
I can confirm that the cost breakdown of University has a lot to do with “student life” stuff. Students might not, if asked point blank, prefer this to 1975 college, but the point is moot because Universities are locked into a competition with their peers, and happening student life is very visible and a good recruitment tool.
I worry that if you see this type of cost increase in a lot of disparate areas it might have to do with old intractable problems like the principal agent problem (although I might be a bit like Grandpa Simpson and see the principal agent problem around every corner). In other words, the mechanism for cost increase is broadly that we ‘conjure up’ entities to do various tasks for us in a modern civilization and those entities start to self-preserve and expand, with associated cost increases. And nothing really ever stops them, because they explicitly expend resources to justify this, every time. The problem with this explanation is why do the graphs look like this _now_ and not previously?
Or perhaps this is related to some sort of “society physics” type of law where the only equilibrium is a large mass of people living on the edge of starvation with a small group of hyper-rich elites — and all moves are moves towards this equilibrium unless explicitly stopped. The problem with this explanation is we need to posit situations where this trend gets reversed (otherwise we would already be at equilibrium).
Re: society physics.
What does SSC think of The Refragmentation by pg?
tldr Inequality is the logical conclusion of technological progress. Except WWII begot the Hansonian Dream-Time.
ps Dream-Time is an old classic from 2009. But Both Plague & War Cut Capital Share? which Hanson posted yesterday also seems relevant, though I haven’t read it yet.
One theory I would put forward is that modern communications have made increased centralization (and therefore more powerful agents) much more feasible than it was 50 or 100 years ago. Large, bureaucratic national agents are much more opaque and less accountable to their principals than smaller, local agents. So while principal-agent has always been a problem, the scope to which the problem can expand is exponentially higher due to technological progress.
So a sort of exploiting of consumer irrationality, then? What causes someone to pay for college is different from what they actually want out of it? Do we see this sort of phenomenon elsewhere, where costs are going up due to costly spending on what is effectively just functioning as advertising?
Empirically false, the “hyper-rich elites” are not stealing resources, they are creating them — e.g. Mark Zuckerberg didn’t make anyone poorer when he became a billionaire by inventing Facebook, and nearly all gains are eventually captured by consumers, the natural and inevitable result of consumers constantly seeking better value propositions in free markets.
The exceptions to this process generally must involve rents and coercion, because rational actors don’t voluntarily make trades that leave them worse off.
I like that you started a comment with “empirically false” that was actually just an attempt to relabel the original claim in accord with a different political stance.
No, it’s really empirically false — there are no examples of any “type of law where the only equilibrium is a large mass of people living on the edge of starvation with a small group of hyper-rich elites.” It’s never happened and can’t with free markets for the reasons I noted (hyper-riches come from expanding markets, and consumers don’t choose to make themselves worse off).
Groups of homo economicus don’t choose to make themselves worse off. Groups of homo sapiens have fertility inversely proportional to income. You don’t need any plutocrat thieves to end up with starving masses, you just need y’ = k*y.
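The y' = k*y point can be made concrete with a toy two-group model. All rates below are invented for illustration: the only claim is that a growth-rate gap alone shifts the population share toward the faster-growing group, with no theft or redistribution anywhere in the model.

```python
import math

# Toy Malthusian illustration: two groups each growing as y' = k*y,
# i.e. y(t) = y(0) * exp(k*t). Rates are hypothetical.
k_poor, k_rich = 0.02, 0.01   # assumed per-year growth rates
y_poor0, y_rich0 = 1.0, 1.0   # equal starting populations

def share_poor(t):
    """Fraction of the total population in the faster-growing group at time t."""
    p = y_poor0 * math.exp(k_poor * t)
    r = y_rich0 * math.exp(k_rich * t)
    return p / (p + r)

print(round(share_poor(0), 2))    # 0.5
print(round(share_poor(200), 2))  # 0.88 -- tends toward 1 as t grows
```

Whether real fertility differentials are large or persistent enough to matter is a separate empirical question; the sketch only shows the mechanism needs nothing beyond differential exponential growth.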
Did you read what I wrote? Where did I mention theft? I meant an equilibrium of resource distribution.
Humans are not rational actors (actually this blog is to some extent a spinoff from a community that tries to think about cognitive biases). Humans make trades that leave them worse off constantly, for a variety of reasons (soft coercion, stupidity, finite horizon issues, culture, etc.)
If you are empirically minded, this post’s original question is a good one to ponder (since it relates to something we are actually observing). Another question it might be worth pondering is why CEO compensation in the US has been behaving in the way we have been observing since the 1980s.
The hyper-rich elites would have to be stealing their hyper-riches if they didn’t make people better off in voluntary exchange, and if they did make them better off it’s hard to see how the masses could be left starving.
Whether humans are perfectly or even barely rational doesn’t matter in economic terms, either way they are generally trying to maximize utility. You or I might not think it’s rational for LeBron James’ ability to play a child’s game of ball-in-hoop to be valued in excess of a billion dollars, but others disagree. We might think it’s fairer to give that money to someone else, or that people would be better off spending money on something else, but good luck convincing his fans of that. Utility lies in the eye of the beholder.
CEO compensation is mainly a function of their potential effect on the value of the company (and applicable laws regarding compensation). Again, you might think it’s irrational, but explain that to the investors paying them — or better yet start or invest in your own CEO-privation companies and triumph over all those inefficient companies paying their CEOs too much.
A more interesting question is why so many of the richest people in the world are now entertainers and computer programmers.
You read a lot of Rand, then?
Many people in this thread have differing political or methodological frameworks, but are using them to make isolated concrete points that other people can agree or disagree with. But you are making sweeping claims we are apparently supposed to take on faith. I can think of no way to explain my disagreement without getting into a Big Conversation that I, as yet, have no reason to believe you would be able to help me with. So I’ll just second the sentiment of Ilya’s comment about your faith.
I read a lot of Friedman, Hayek and Feynman, probably the most important thinkers of the latter half of the twentieth century.
Jack, I gave some concrete examples in support of my sweeping points, and received none in response. You and Ilya seem to be making the faith-based arguments here, since there are not in fact a lot of examples of hyper-rich elites who didn’t make a lot of other people better off, nor has any plausible mechanism been advanced as to how this might happen.
The issue is in the framing. Here are some contentious framings you have employed: 1) you characterized the inequality Ilya Shpitser hypothesized about as stealing; 2) you defined creating and taking resources as exclusive categories; 3) you defined theft and making people better off in voluntary exchanges as exclusive categories; 4) you characterised [something? actual markets? most markets?] as free markets; 5) you wrote off an above claim about bounded rationality by characterizing people as utility-maximizers who “therefore” tend to only make decisions that increase their subjective utility; 6) you have characterised what governments do as coercion in some sense that most interactions are not (this characterisation is more from other threads here); and, 7) you have throughout taken the individual as the unit of analysis. My issue is not that any of these framings are “wrong”, just that they all assume so much and lead in the same direction. So I feel like to have a conversation with you about these issues we’d have to drill all the way down to discussing the terms of the discussion, the effects those terms have, and why we would use some framings and not others.
CEOs are a terrible example, both because their income is negligible, both as a proportion of GDP and a proportion of their company’s income; and because their income growth is quite explicable as a function of increasing company size. A much better example would be the finance sector, which is a much larger portion of GDP today than in 1970.
Why CEOs and not everyone else?
There were large companies in the past, why did CEO compensation spike in the last 20 years, say?
Why did CEO compensation spike even in moderate-sized companies?
I certainly agree there are much larger questions here than CEOs specifically. But it would be useful to have good models for why some folks can successfully coordinate to outpace: company valuation, compensation of others, etc. when arguing for their own compensation.
CEO pay is much higher because the GDP produced by the 1000 largest companies is much larger than in the past, but they still only have 1000 CEOs. The definition of “medium-sized” could vary, but the same is true for the top 10,000.
Strictly speaking most CEOs actually don’t make very much, because there are vastly more CEOs of small LLCs than of large C corps.
I don’t think there are any simple explanations, or any simple solutions. In other words, these effects have multiple causes, and there is no easy way to fix things.
On the other hand, it is not difficult to see contributing factors, and suggest moves that, if made, would reduce costs. It is just that we are unwilling to make those moves, possibly for good reasons.
Consider health care and education, where this effect is plausibly the strongest.
Health care: suppose health insurance was permanently banned. Within a very short time, healthcare costs would go dramatically down. This is because no one would get paid except sums of money that people actually have. So they would be forced to set prices at a level that people could pay, which is far lower than the sums that actually get paid by insurance. So there you have an intervention which would reduce costs, but no one is willing to make that intervention, plausibly for very good reasons.
The nature of that intervention suggests one of the contributing factors to the problem. Insurance appears to have the ability to pay an unlimited sum, and people appear to have an unlimited amount of desire for healthcare. Given such a situation, supply and demand would lead to an unlimited price. Of course this effect is limited in reality because both are false: insurance can pay more, but not an unlimited sum, and people’s desire for healthcare is not really unlimited. But these things are approximately true at least to the extent that as society gets richer [i.e. the one who is actually paying rather than the sick person], healthcare costs will rise for these reasons.
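One way to make the third-party-payer point concrete: if the patient faces only a fraction c of the sticker price, a seller facing the same underlying demand can raise the sticker price roughly in proportion to 1/c. A toy linear-demand sketch, with all parameters invented for illustration:

```python
# Toy third-party-payment model: buyers respond to their out-of-pocket
# price q = c * p, where p is the sticker price and c is the fraction
# they actually pay. With linear demand D(q) = a - b*q, a revenue-
# maximizing seller charges p* = a / (2*b*c): sticker price ~ 1/c.
a, b = 100.0, 1.0  # hypothetical demand-curve parameters

def optimal_sticker_price(c):
    """Revenue-maximizing sticker price when buyers pay fraction c of it."""
    # Revenue = p * (a - b*c*p); maximized where d/dp = a - 2*b*c*p = 0.
    return a / (2 * b * c)

print(optimal_sticker_price(1.0))  # 50.0  -> fully out-of-pocket
print(optimal_sticker_price(0.2))  # 250.0 -> insurance covers 80%
```

This is only a sketch of the incentive, not a claim about actual medical pricing: real insurers negotiate, and demand for care is not linear. But it shows why shielding the buyer from the price tends to push the price tag up.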
The same arguments can be made in the case of education, with the proposed intervention being “permanently ban student loans.”
Organizations also often do in-house software development/maintenance/support, which wasn’t needed before, is becoming more common, and can cost absurd amounts of money.
Excellent discussion. Many of the proposed explanations in the comments seem like plausible factors; however, much of that discussion doesn’t address why those factors would be unique to the US.
To me… to me it sort of makes intuitive sense.
It feels like the natural way of complex systems.
I’ve worked in small, medium and large companies/organizations and young/old organizations/departments.
And there’s a pattern. It’s Molochian, in that most of the people involved can see the problems around them, and there are logical reasons for each contributing thing, but together they become a set of concrete shoes for the organisation, and there’s no easy solution.
Many programmers know what it’s like with big complex old systems.
In a group with a few people writing new code vast amounts of work per person can get done in extremely short time-spans.
In a big old company things move at a glacial pace. If it’s a good company though they at least move and get there eventually.
In the latter, you find yourself tied into mostly pointless meetings; anything of substance you need to do ends up needing to be run by 20 different people.
Technology doesn’t help with this. It can help short term but mostly technology merely facilitates this.
There’s an accompanying growth of paperwork. This isn’t the mean old government or regulation (at least not always). In a private firm working for private firms, there were often massive sets of required paperwork.
Sometimes some of this paperwork becomes automated. Forms you previously needed to fill out get automatically filled in by some program so you don’t need to do it.
But I said before, technology doesn’t actually help. It just allows you to ignore the problem for longer.
And each individual item is there for what feels like a good reason. Once, someone killed someone, and the decision afterwards was to add item 101 to the checklist to make sure there was nobody in the danger zone at that step. If you remove it, it’s entirely foreseeable that you could end up in court with someone saying “and the safety measure that could have saved little Billy’s life was removed to save a few pennies and seconds by you!”
Technology does not help, though, because this has always been a problem that expands to meet the capabilities of the system, until it forces people to deal with it. Automate 10K pieces of paperwork and it just means the system can now cope with those 10K pieces plus whatever else it could cope with before someone needs to take the hit and change things.
It’s not just safety measures, it’s not just liability.
It’s everything. If technology allows you to keep fulfilling a legacy requirement to talk to some other system then you will. If technology takes up some of the strain it allows more load to be placed on the system.
So it’s not surprising to me that those schools cope better. Their employees are probably happier too since they don’t spend their time on what amounts to system maintenance. Give it 40 years though and they’ll probably look a lot like the other schools unless you’ve got some very unusual individuals in charge.
As far as I can tell no organization or organization type is immune to the phenomenon.
Some are a little more resistant: organizations which explicitly resist complexity and resist making their systems more complex at every step. The decay is slowed but not halted.
It’s also a tad cancerous. Once you have 10K policies, and people whose jobs are to write policies, and policies on dealing with policies, the additional cost of one new one is very small, and there’s always a “good reason”.
It’s at every level, it’s not just paperwork, it’s also in the interactions between employees, it’s in the interactions with clients.
To be clear:
Cutting out just one area gains you very little. Cut out every government regulation and you’ll just find yourself in the same place in 10 years.
It’s not just regulations, it’s also organizational memory and organizational trauma in the form of policies to deal with things which have harmed the organization in the past.
Add to that: in a city like New York, a big project has to interact with thousands of legacy systems of various sizes to some degree.
Some very very old institutions have structures to deal with it, often they take the form of modularity. Organization structures that allow the creation and destruction of entire departments or other modules as a whole, wiping the slate clean.
The principles of system design and system architecture apply to everything, not just software.
The US has been rich with strong institutions for a long time. Comparative newcomers don’t need to deal with hundreds of years of legacy systems in countries where everything was burned to the ground 50 years ago.
Libertarianism does not solve the problem, it just kicks the can down the road a little unless there’s massive churn of every system and meta-system as well with the incentives aligning perfectly.
The difference is that if you grow increasingly inefficient in the private sector, someone else will eventually come along and supply a superior value proposition. There are limits to how much bloat can accumulate. There is no such mechanism to self-correct the areas Scott is discussing.
That mechanism culls a little of it, but a multi-billion-dollar multinational can survive a great deal of bloat, weighed down massively while still having the clout to stamp on competitors.
Vote Reapers 2020! It’s the responsible choice to avoid sclerotization!
I work in a physics lab, and we use a lot of scientific equipment. Scientific equipment is expensive. Sometimes, this is because it is expensive to make, but sometimes, it’s just because people are used to paying a lot of money for it, and are suspicious when it is too cheap.
I recently started using a $25 consumer electronic device in place of something that would normally be a $2000 piece of specialized equipment. It occurred to me that I could start making a version of the $25 device, with some minor tweaks to make it more user-friendly for technical work, and charge almost as much as I want, so long as it’s cheaper than $2000, especially because the sort of lab that would want it would need a big budget just to operate at all. Actually, I couldn’t price it however I like: if I tried to sell it for $25, nobody would buy it, because they wouldn’t think it was “real” scientific equipment. I’m not even sure how much would be enough. $200? $500?
I don’t know if this can account for the $700 bag of saline or not. I imagine hospitals employing people who purchase saline, and I don’t imagine these people being as susceptible to the real-medical-equipment-is-expensive bias as a graduate student in a physics lab, but I can imagine there being a strong enough desire to avoid lawsuits to bias someone toward the option that seems more legitimate, and appearing legitimate usually involves price.
If someone tried to sell “made in Thailand costs 50c” bags of saline to a hospital, I wouldn’t be surprised that suspicions about quality control would be triggered, but you would need one heck of a “more expensive is better” mindset to pay $700 for a bag of saline. I could see paying $10, I could even see bumping it up to $50 for extra-special guaranteed not to trigger any weird allergies saline, but not that kind of money. There has to be a reason behind that price, if it was a genuine one.
As I understand it, in the US our hospitals have the following problems: (1) as Scott mentions, people who can’t pay show up with emergencies and it’s illegal to deny them care, and (2) most insurance companies aggressively negotiate down the prices of the services they cover, though some are more aggressive about it than others, particularly Medicare and Medicaid. So the hospital may only pay $1 per bag of saline to its supplier, and you’d think $5 or so would be a reasonable price to just charge everyone, but if they bill the patient $700 then they’ll get various insurance companies paying $2-$50 or more per bag and the occasional non-insured person who isn’t judgement-proof like Scott’s “frequent flyers” and winds up having to pay more or less the full $700. And this is what they feel like they have to do to make ends meet.
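The cost-shifting arithmetic in that comment can be sketched with toy numbers (every figure below is a made-up assumption for illustration, not data from the comment):

```python
# Illustrative hospital cost-shifting arithmetic (all numbers assumed):
# the hospital pays its supplier ~$1/bag but lists saline at $700,
# because each payer ends up paying a different fraction of list price.

list_price = 700.0    # hypothetical "chargemaster" price per bag
supplier_cost = 1.0   # hypothetical price the hospital pays its supplier

# Hypothetical payer mix: (share of patients, amount actually collected)
payer_mix = [
    (0.40, 2.0),      # Medicare/Medicaid: aggressively negotiated down
    (0.45, 50.0),     # private insurers: negotiated, less aggressively
    (0.10, 0.0),      # uninsured and judgment-proof: nothing collected
    (0.05, 700.0),    # uninsured but collectible: billed full list price
]

avg_collected = sum(share * paid for share, paid in payer_mix)
margin = avg_collected - supplier_cost
print(f"average collected per bag: ${avg_collected:.2f}")   # $58.30
print(f"margin over supplier cost: ${margin:.2f}")          # $57.30
```

Under these assumed numbers, the $700 sticker price is less a real price than a negotiating anchor: what the hospital actually collects averages out to far less, and the few full-price payers subsidize the rest.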
Yes, I suppose distrust of low-cost medical supplies can only explain how much the hospital pays a supplier, not how much patients pay the hospital. It sounds like “IV therapy” is seen as a convenient place to charge patients or their insurance company to offset other costs to the hospital.
I haven’t gone through all the comments yet, but how hard would it really be to look at a bunch of budgets from public schools ranging from like 1965 to the present and actually just physically look at where the money used to go and where it goes now? Has anyone done this research? It seems like a really obvious first step.
That’s been done – the best/easiest data source is here:
Employment (50% of staff are non-teachers)
Thanks, this is great!
My thoughts on this, as pasted (with minimal edits) from a Medium post I just wrote: https://medium.com/@davidmanheim/chasing-superior-good-syndrome-vs-baumols-or-scott-s-cost-disease-40327ae87b45
I think Scott misses an important dynamic that I’d like to lay out.
Above, he lists eight potential answers, each of which he partly dismisses. Cost increases are really happening, and markets mostly work, so it’s not simply a market failure. Government inefficiency and overregulation don’t really explain large parts of the problem, nor does fear of lawsuits. Risk tolerance has decreased, but that seems not to have been the sole issue. Cost shirking by some people might increase costs a bit, but that isn’t the whole picture. Finally, not in the list explicitly, but implicitly explored when Scott refers to “politics,” is Moloch.
I think it’s a bit strange to end the long list of partial answers, which plausibly explain the vast majority of the issue with “What’s happening? I don’t know and I find it really scary.” But I think there is another dynamic that’s being ignored — and I would be surprised if an economist ignored it, but I’ll blame Scott’s eclectic ad-hoc education for why he doesn’t discuss the elephant in the room — Superior goods.
For those who don’t remember their Economics classes, imagine a guy who makes $40,000/year and eats chicken for dinner 3 nights a week. He gets a huge 50% raise, to $60,000/year, and suddenly has extra money to spend — his disposable income probably tripled or quadrupled. Before the hedonic treadmill kicks in, and he decides to waste all the money on higher rent and nicer cars, he changes his diet. But he won’t start eating chicken 10 times a week — he’ll start eating steak. When people get more money, they replace cheap “inferior” goods with expensive “superior” goods. And steak is a superior good.
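The arithmetic behind “his disposable income probably tripled or quadrupled” is worth making explicit. A quick sketch (the fixed-cost figure is an assumption I’ve added, not from the comment):

```python
# Why a 50% raise can roughly quadruple disposable income:
# fixed costs (rent, taxes, basics) don't scale with the raise.
# The $33,000 fixed-cost figure is an illustrative assumption.

fixed_costs = 33_000

disposable_before = 40_000 - fixed_costs   # $7,000
disposable_after = 60_000 - fixed_costs    # $27,000

ratio = disposable_after / disposable_before
print(f"disposable income multiplied by {ratio:.1f}x")  # ~3.9x
```

This is why spending on superior goods can jump so sharply with modest income growth: the raise lands almost entirely in the discretionary slice of the budget.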
But how many times a week will people eat steak? Two? Five? Americans as a whole got really rich in the 1940s and 1950s, and needed someplace to start spending their newfound wealth. What do people spend extra money on? Entertainment is now pretty cheap, and there are only so many nights a week you’ll see a movie, and only so many $20/month MMORPGs you’re going to pay for. You aren’t going to pay 5 times as much for a slightly better video game or movie — and although you might pay double for 3D IMAX, there’s not much room for growth in that ~5% of the budget.
The Atlantic had a piece on this several years ago: https://www.theatlantic.com/business/archive/2012/04/how-america-spends-money-100-years-in-the-life-of-the-family-budget/255475/
Food, including rising steak consumption, decreased to a negligible part of people’s budgets, as housing started rising. The other big change the article discusses is that after 1950 or so, everyone got cars, and commuted from their more expensive suburban houses — which is effectively an implicit increase in housing cost.
And at some point, bigger houses and nicer cars begin to saturate; a Tesla is nicer than my Hyundai, and I’d love one, but not enough to upgrade for 3x the cost. I know how much better a Tesla is — I’ve seen them.
Limitless Demand, Invisible Supply
There are only a few things that we have a limitless demand for, but very limited ability to judge the impact of our spending. What are they?
I think this is one big missing piece of the puzzle; in both healthcare and education, we want improvements, and they are worth a ton, but we can’t figure out how much the marginal spending improves things. So we pour money into these sectors.
Scott thinks this means that teachers’ and doctors’ wages should rise, but they don’t. I think it’s obvious why: their supply isn’t very limited. And the marginal impact of two teachers versus one, or a team of doctors versus one, isn’t huge. (Class size matters, but we have tons of teachers — with no shortage in sight, there is no price pressure.)
What sucks up the increased money? Dollars, both public and private, chasing hard to find benefits.
I’d spend money to improve my health, both mental and physical, but how? Extra medical diagnostics to catch problems, pricier but marginally more effective drugs, chiropractors, probably useless supplements — all are exploding in popularity. How much do they improve health? I don’t really know — not much, but I’d probably try something if it might be useful.
I’m spending a ton of money on preschool for my kids. Why? Because it helps, according to the studies. How much better is the $15,000/year daycare versus the $8,000 a year program a friend of mine runs in her house? Unclear, but I’m certainly not the only one spending big bucks. Why spend less, if education is the most superior good around?
How much better is Harvard than a subsidized in-state school, or four years of that school versus 2 years of cheap community college before transferring in? The studies seem to suggest that most of the benefit is really due to the kids who get into the better schools, not the schools themselves. And Scott knows that this is happening. (Linking to previous posts here.)
We pour money into schools and medicine in order to improve things, but where does the money go? Into efforts to improve things, of course. But I’ve argued at length before that bureaucracy is bad at incentivizing things, especially when goals are unclear. So the money goes to sinkholes like more bureaucrats and clever manipulation of the metrics that are used to allocate the money.
As long as we’re incentivized to improve things that we’re unsure how to improve, the incentives to pour money into them unwisely will continue, and costs will rise. That’s not the entire answer, but it’s a central dynamic that leads to many of the things Scott is talking about — so hopefully that reduces Scott’s fears a bit.