I.
Tyler Cowen writes about cost disease. I’d previously heard the term used to refer only to a specific theory of why costs are increasing, involving labor becoming more efficient in some areas than others. Cowen seems to use it indiscriminately to refer to increasing costs in general – which I guess is fine, goodness knows we need a word for that.
Cowen assumes his readers already understand that cost disease exists. I don’t know if this is true. My impression is that most people still don’t know about cost disease, or don’t realize the extent of it. So I thought I would make the case for the cost disease in the sectors Tyler mentions – health care and education – plus a couple more.
First let’s look at primary education:
There was some argument about the style of this graph, but as per Politifact the basic claim is true. Per student spending has increased about 2.5x in the past forty years even after adjusting for inflation.
At the same time, test scores have stayed relatively stagnant. You can see the full numbers here, but in short, high school students’ reading scores went from 285 in 1971 to 287 today – a difference of 0.7%.
There is some heterogeneity across races – white students’ test scores increased by 1.4% and minority students’ scores by about 20%. But it is hard to credit school spending for the minority students’ improvement, which occurred almost entirely during the period from 1975-1985. School spending has been on exactly the same trajectory before and after that time, and in white and minority areas, suggesting that there was something specific about that decade which improved minority (but not white) scores. Most likely this was the general improvement in minorities’ conditions around that time, giving them better nutrition and a more stable family life. It’s hard to construct a narrative where it was school spending that did it – and even if it did, note that the majority of the increase in school spending happened from 1985 on, and demonstrably helped neither whites nor minorities.
I discuss this phenomenon more here and here, but the summary is: no, it’s not just because of special ed; no, it’s not just a factor of how you measure test scores; no, there’s not a “ceiling effect”. Costs really did more-or-less double without any concomitant increase in measurable quality.
So, imagine you’re a poor person. White, minority, whatever. Which would you prefer? Sending your child to a 2016 school? Or sending your child to a 1975 school, and getting a check for $5,000 every year?
I’m proposing that choice because as far as I can tell those are the stakes here. 2016 schools have whatever tiny test score advantage they have over 1975 schools, and cost $5,000/year more, inflation adjusted. That $5,000 comes out of the pocket of somebody – either taxpayers, or other people who could be helped by government programs.
Second, college is even worse:
Note this is not adjusted for inflation; see link below for adjusted figures
Inflation-adjusted cost of a university education was something like $2000/year in 1980. Now it’s closer to $20,000/year. No, it’s not because of decreased government funding, and there are similar trajectories for public and private schools.
I don’t know if there’s an equivalent of “test scores” measuring how well colleges perform, so just use your best judgment. Do you think that modern colleges provide $18,000/year greater value than colleges did in your parents’ day? Would you rather graduate from a modern college, or graduate from a college more like the one your parents went to, plus get a check for $72,000?
(or, more realistically, have $72,000 less in student loans to pay off)
Was your parents’ college even noticeably worse than yours? My parents sometimes talk about their college experience, and it seems to have had all the relevant features of a college experience. Clubs. Classes. Professors. Roommates. I might have gotten something extra for my $72,000, but it’s hard to see what it was.
Third, health care. The graph is starting to look disappointingly familiar:
The cost of health care has about quintupled since 1970. It’s actually been rising since earlier than that, but I can’t find a good graph; it looks like it would have been about $1200 in today’s dollars in 1960, for an increase of about 800% in those fifty years.
This has had the expected effects. The average 1960 worker spent ten days’ worth of their yearly paycheck on health insurance; the average modern worker spends sixty days’ worth of it, a sixth of their entire earnings.
Or not.
This time I can’t say with 100% certainty that all this extra spending has been for nothing. Life expectancy has gone way up since 1960:
Extra bonus conclusion: the Spanish flu was really bad
But a lot of people think that life expectancy depends on other things a lot more than healthcare spending. Sanitation, nutrition, quitting smoking, plus advances in health technology that don’t involve spending more money. ACE inhibitors (invented in 1975) are great and probably increased lifespan a lot, but they cost $20 for a year’s supply and replaced older drugs that cost about the same amount.
In terms of calculating how much lifespan gain healthcare spending has produced, we have a couple of options. Start with comparisons by country:
Countries like South Korea and Israel have about the same life expectancy as the US but pay about 25% of what we do. Some people use this to prove the superiority of centralized government health systems, although Random Critical Analysis has an alternative perspective. In any case, it seems very possible to get the same improving life expectancies as the US without octupling health care spending.
The Netherlands increased their health budget by a lot around 2000, sparking a bunch of studies on whether that increased life expectancy or not. There’s a good meta-analysis here, which lists six studies trying to calculate how much of the change in life expectancy was due to the large increases in health spending during this period. There’s a broad range of estimates: 0.3%, 1.8%, 8.0%, 17.2%, 22.1%, 27.5% (I’m taking their numbers for men; the numbers for women are pretty similar). They also mention two studies that they did not officially include; one finding 0% effect and one finding 50% effect (I’m not sure why these studies weren’t included). They add:
In none of these studies is the issue of reverse causality addressed; sometimes it is not even mentioned. This implies that the effect of health care spending on mortality may be overestimated.
They say:
Based on our review of empirical studies, we conclude that it is likely that increased health care spending has contributed to the recent increase in life expectancy in the Netherlands. Applying the estimates from published studies to the observed increase in health care spending in the Netherlands between 2000 and 2010 [of 40%] would imply that 0.3% to almost 50% of the increase in life expectancy may have been caused by increasing health care spending. An important reason for the wide range in such estimates is that they all include methodological problems highlighted in this paper. However, this wide range indicates that the counterfactual study by Meerding et al, which argued that 50% of the increase in life expectancy in the Netherlands since the 1950s can be attributed to medical care, can probably be interpreted as an upper bound.
It’s going to be completely irresponsible to try to apply this to the increase in health spending in the US over the past 50 years, since this is probably different at every margin and the US is not the Netherlands and the 1950s are not the 2010s. But if we irresponsibly take their median estimate and apply it to the current question, we get that increasing health spending in the US has been worth about one extra year of life expectancy.
This study attempts to directly estimate a %GDP health spending to life expectancy conversion, and says that an increase of 1% GDP corresponds to an increase of 0.05 years life expectancy. That would suggest a slightly different number of 0.65 years of life expectancy gained by healthcare spending since 1960.
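To make the arithmetic behind these two estimates explicit, here is a quick sketch. The six percentages are the meta-analysis figures quoted above; the ~9-year US life expectancy gain since 1960 and the ~13-percentage-point rise in health spending’s GDP share are my own rough assumptions, not numbers from the studies.

```python
import statistics

# Six estimates (men) from the Dutch meta-analysis: the share of the
# life-expectancy gain attributable to increased health spending.
estimates = [0.3, 1.8, 8.0, 17.2, 22.1, 27.5]  # percent
median_share = statistics.median(estimates)    # 12.6%

# Rough assumption: US life expectancy rose about 9 years, 1960-2010.
us_gain_years = 9
years_from_spending = us_gain_years * median_share / 100
print(f"median-estimate method: ~{years_from_spending:.1f} years")  # ~1.1

# Alternative: 0.05 years of life expectancy per +1% of GDP on health.
# Rough assumption: US health spending went from ~5% to ~18% of GDP.
gdp_point_increase = 18 - 5  # percentage points
print(f"GDP method: ~{gdp_point_increase * 0.05:.2f} years")  # 0.65
```

Either way you get on the order of one year of life expectancy attributable to the extra spending, which is what the text above asserts.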
If these numbers seem absurdly low, remember all of those controlled experiments where giving people insurance doesn’t seem to make them much healthier in any meaningful way.
Or instead of slogging through the statistics, we can just ask the same question as before. Do you think the average poor or middle-class person would rather:
a) Get modern health care
b) Get the same amount of health care as their parents’ generation, but with modern technology like ACE inhibitors, and also earn $8000 extra a year
Fourth, we see similar effects in infrastructure. The first New York City subway opened around 1900. Various sources list lengths from 10 to 20 miles and costs from $30 million to $60 million – I think my sources are capturing it at different stages of construction with different numbers of extensions. In any case, that suggests costs between $1.5 million and $6 million per mile, or $1-4 million per kilometer. That looks like roughly the inflation-adjusted equivalent of $100 million/kilometer today, though I’m very uncertain about that estimate. In contrast, Vox notes that a new New York subway line opening this year costs about $2.2 billion per kilometer, suggesting a cost increase of around twenty times – though, again, these estimates are very uncertain.
Things become clearer when you compare them country-by-country. The same Vox article notes that Paris, Berlin, and Copenhagen subways cost about $250 million per kilometer, almost 90% less. Yet even those European subways are overpriced compared to Korea, where a kilometer of subway in Seoul costs $40 million (another Korean subway project came in at $80 million/km). This is a difference of 50x between Seoul and New York for apparently comparable services. It suggests that the 1900s New York estimate above may have been roughly accurate if the builders of that era worked with efficiency roughly in line with that of modern Europe and Korea.
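As a sanity check on the per-kilometer arithmetic in the last two paragraphs, here is a sketch; the 1900 cost and length ranges are the rough figures quoted above, and the ~30x cumulative inflation multiplier since the early 1900s is my own assumption.

```python
KM_PER_MILE = 1.609

# Early-1900s New York subway: sources give $30-60 million for 10-20 miles.
low_per_mile = 30 / 20   # $1.5M/mile: cheap estimate over the long length
high_per_mile = 60 / 10  # $6M/mile: expensive estimate over the short length
low_per_km = low_per_mile / KM_PER_MILE    # ~$0.9M/km
high_per_km = high_per_mile / KM_PER_MILE  # ~$3.7M/km

# Rough assumption: ~30x cumulative inflation since the early 1900s.
INFLATION = 30
print(f"1900 NYC in today's dollars: "
      f"${low_per_km * INFLATION:.0f}M-${high_per_km * INFLATION:.0f}M per km")

# Modern comparisons, in millions of dollars per km, from the Vox figures.
new_york, europe, seoul = 2200, 250, 40
print(f"New York vs Europe: {new_york / europe:.0f}x cheaper in Europe")
print(f"New York vs Seoul: {new_york / seoul:.0f}x cheaper in Seoul")
```

The inflation-adjusted 1900 range brackets the ~$100M/km figure, and the modern ratios come out near the "almost 90% less" and "difference of 50x" claims above.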
Fifth, housing (source):
Most of the important commentary on this graph has already been said, but I would add that optimistic takes like this one by the American Enterprise Institute are missing some of the dynamic. Yes, homes are bigger than they used to be, but part of that is zoning laws which make it easier to get big houses than small houses. There are a lot of people who would prefer to have a smaller house but don’t. When I first moved to Michigan, I lived alone in a three bedroom house because there were no good one-bedroom houses available near my workplace and all of the apartments were loud and crime-y.
Or, once again, just ask yourself: do you think most poor and middle class people would rather:
1. Rent a modern house/apartment
2. Rent the sort of house/apartment their parents had, for half the cost
II.
So, to summarize: in the past fifty years, education costs have doubled, college costs have dectupled, health insurance costs have dectupled, subway costs have at least dectupled, and housing costs have increased by about fifty percent. US health care costs about four times as much as equivalent health care in other First World countries; US subways cost about eight times as much as equivalent subways in other First World countries.
I worry that people don’t appreciate how weird this is. I didn’t appreciate it for a long time. I guess I just figured that Grandpa used to talk about how back in his day movie tickets only cost a nickel; that was just the way of the world. But all of the numbers above are inflation-adjusted. These things have dectupled in cost even after you adjust for movies costing a nickel in Grandpa’s day. They have really, genuinely dectupled in cost, no economic trickery involved.
And this is especially strange because we expect that improving technology and globalization ought to cut costs. In 1983, the first mobile phone cost $4,000 – about $10,000 in today’s dollars. It was also a gigantic piece of crap. Today you can get a much better phone for $100. This is the right and proper way of the universe. It’s why we fund scientists, and pay businesspeople the big bucks.
But things like college and health care have still had their prices dectuple. Patients can now schedule their appointments online; doctors can send prescriptions through the fax, pharmacies can keep track of medication histories on centralized computer systems that interface with the cloud, nurses get automatic reminders when they’re giving two drugs with a potential interaction, insurance companies accept payment through credit cards – and all of this costs ten times as much as it did in the days of punch cards and secretaries who did calculations by hand.
It’s actually even worse than this, because we take so many opportunities to save money that were unavailable in past generations. Underpaid foreign nurses immigrate to America and work for a song. Doctors’ notes are sent to India overnight where they’re transcribed by sweatshop-style labor for pennies an hour. Medical equipment gets manufactured in goodness-only-knows which obscure Third World country. And it still costs ten times as much as when this was all made in the USA – and that back when minimum wages were proportionally higher than today.
And it’s actually even worse than this. A lot of these services have decreased in quality, presumably as an attempt to cut costs even further. Doctors used to make house calls; even when I was young in the ’80s my father would still go to the houses of difficult patients who were too sick to come to his office. This study notes that for women who give birth in the hospital, “the standard length of stay was 8 to 14 days in the 1950s but declined to less than 2 days in the mid-1990s”. The doctors I talk to say this isn’t because modern women are healthier, it’s because they kick them out as soon as it’s safe to free up beds for the next person. Historic records of hospital care generally describe leisurely convalescence periods and making sure somebody felt absolutely well before letting them go; this seems bizarre to anyone who has participated in a modern hospital, where the mantra is to kick people out as soon as they’re “stable” ie not in acute crisis.
If we had to provide the same quality of service as we did in 1960, and without the gains from modern technology and globalization, who even knows how many times more health care would cost? Fifty times more? A hundred times more?
And the same is true for colleges and houses and subways and so on.
III.
The existing literature on cost disease focuses on the Baumol effect. Suppose in some underdeveloped economy, people can choose either to work in a factory or join an orchestra, and the salaries of factory workers and orchestra musicians reflect relative supply and demand and profit in those industries. Then the economy undergoes a technological revolution, and factories can produce ten times as many goods. Some of the increased productivity trickles down to factory workers, and they earn more money. Would-be musicians leave the orchestras behind to go work in the higher-paying factories, and the orchestras have to raise their prices if they want to be assured enough musicians. So tech improvements in the factory sector raise prices in the orchestra sector.
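The Baumol story can be made concrete with a stylized toy model (every number here is invented for illustration): productivity jumps tenfold in the factory sector, wages equalize across sectors, and the price of a concert – whose labor requirement hasn’t changed – rises relative to goods.

```python
# Stylized Baumol-effect toy model; all numbers are invented.
# One worker-hour makes `productivity` widgets, or plays 1 concert-hour.
wage = 10.0          # starting hourly wage in both sectors
productivity = 1.0   # widgets per worker-hour, before the revolution

def unit_costs(wage, productivity):
    widget_cost = wage / productivity  # labor cost per widget
    concert_cost = wage / 1.0          # concerts still take 1 hour each
    return widget_cost, concert_cost

before = unit_costs(wage, productivity)

# Technological revolution: factories become 10x as productive, and
# competition for workers pushes wages up in BOTH sectors (here a
# stylized fivefold wage rise, splitting the gains with owners).
productivity *= 10
wage *= 5
after = unit_costs(wage, productivity)

print(f"widget cost:  {before[0]:.2f} -> {after[0]:.2f}")   # falls
print(f"concert cost: {before[1]:.2f} -> {after[1]:.2f}")   # rises 5x
```

Widgets get cheaper even as wages rise, but concerts – where one hour of labor still yields one hour of music – get more expensive in exact proportion to the wage. That is the Baumol mechanism in miniature.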
We could tell a story like this to explain rising costs in education, health care, etc. If technology increases productivity for skilled laborers in other industries, then less susceptible industries might end up footing the bill since they have to pay their workers more.
There’s only one problem: health care and education aren’t paying their workers more; in fact, quite the opposite.
Here are teacher salaries over time (source):
Teacher salaries are relatively flat adjusting for inflation. But salaries for other jobs are increasing modestly relative to inflation. So teacher salaries relative to other occupations’ salaries are actually declining.
Here’s a similar graph for professors (source):
Professor salaries are going up a little, but again, they’re probably losing position relative to the average occupation. Also, note that although the average salary of each type of faculty is stable or increasing, the average salary of all faculty is going down. No mystery here – colleges are doing everything they can to switch from tenured professors to adjuncts, who complain of being overworked and abused while making about the same amount as a Starbucks barista.
This seems to me a lot like the case of the hospitals cutting care for new mothers. The price of the service dectuples, yet at the same time the service has to sacrifice quality in order to control costs.
And speaking of hospitals, here’s the graph for nurses (source):
Female nurses’ salaries went from about $55,000 in 1988 to $63,000 in 2013. This is probably around the average wage increase during that time. Also, some of this reflects changes in education: in the 1980s only 40% of nurses had a degree; by 2010, about 80% did.
And for doctors (source):
Stable again! Except that a lot of doctors’ salaries now go to paying off their medical school debt, which has been ballooning like everything else.
I don’t have a similar graph for subway workers, but come on. The overall picture is that health care and education costs have managed to increase by ten times without a single cent of the gains going to teachers, doctors, or nurses. Indeed, these professions seem to have lost ground salary-wise relative to others.
I also want to add some anecdote to these hard facts. My father is a doctor and my mother is a teacher, so I got to hear a lot about how these professions have changed over the past generation. It seems at least a little like the adjunct story, although without the clearly defined “professor vs. adjunct” dichotomy that makes it so easy to talk about. Doctors are really, really, really unhappy. When I went to medical school, some of my professors would tell me outright that they couldn’t believe anyone would still go into medicine with all of the new stresses and demands placed on doctors. This doesn’t seem to be limited to one medical school. Wall Street Journal: Why Doctors Are Sick Of Their Profession – “American physicians are increasingly unhappy with their once-vaunted profession, and that malaise is bad for their patients”. The Daily Beast: How Being A Doctor Became The Most Miserable Profession – “Being a doctor has become a miserable and humiliating undertaking. Indeed, many doctors feel that America has declared war on physicians”. Forbes: Why Are Doctors So Unhappy? – “Doctors have become like everyone else: insecure, discontent and scared about the future.” Vox: Only Six Percent Of Doctors Are Happy With Their Jobs. Al Jazeera America: Here’s Why Nine Out Of Ten Doctors Wouldn’t Recommend Medicine As A Profession. Read these articles and they all say the same thing that all the doctors I know say – medicine used to be a well-respected, enjoyable profession where you could give patients good care and feel self-actualized. Now it kind of sucks.
Meanwhile, I also see articles like this piece from NPR saying teachers are experiencing historic stress levels and up to 50% say their job “isn’t worth it”. Teacher job satisfaction is at historic lows. And the veteran teachers I know say the same thing as the veteran doctors I know – their jobs used to be enjoyable and make them feel like they were making a difference; now they feel overworked, unappreciated, and trapped in mountains of paperwork.
It might make sense for these fields to become more expensive if their employees’ salaries were increasing. And it might make sense for salaries to stay the same if employees instead benefitted from lower workloads and better working conditions. But neither of these are happening.
IV.
So what’s going on? Why are costs increasing so dramatically? Some possible answers:
First, can we dismiss all of this as an illusion? Maybe adjusting for inflation is harder than I think. Inflation is an average, so some things have to have higher-than-average inflation; maybe it’s education, health care, etc. Or maybe my sources have the wrong statistics.
But I don’t think this is true. The last time I talked about this problem, someone mentioned they’re running a private school which does just as well as public schools but costs only $3000/student/year, a fourth of the usual rate. Marginal Revolution notes that India has a private health system that delivers the same quality of care as its public system for a quarter of the cost. Whenever the same drug is provided by the official US health system and some kind of grey market supplement sort of thing, the grey market supplement costs between a fifth and a tenth as much; for example, Google’s first hit for Deplin®, official prescription L-methylfolate, costs $175 for a month’s supply; unregulated L-methylfolate supplement delivers the same dose for about $30. And this isn’t even mentioning things like the $1 bag of saline that costs $700 at hospitals. Since it seems like it’s not too hard to do things for a fraction of what we currently do things for, probably we should be less reluctant to believe that the cost of everything is really inflated.
Second, might markets just not work? I know this is kind of an extreme question to ask in a post on economics, but maybe nobody knows what they’re doing in a lot of these fields and people can just increase costs and not suffer any decreased demand because of it. Suppose that people proved beyond a shadow of a doubt that Khan Academy could teach you just as much as a normal college education, but for free. People would still ask questions like – will employers accept my Khan Academy degree? Will it look good on a resume? Will people make fun of me for it? The same is true of community colleges, second-tier colleges, for-profit colleges, et cetera. I got offered a free scholarship to a mediocre state college, and I turned it down on the grounds that I knew nothing about anything and maybe years from now I would be locked out of some sort of Exciting Opportunity because my college wasn’t prestigious enough. Assuming everyone thinks like this, can colleges just charge whatever they want?
Likewise, my workplace offered me three different health insurance plans, and I chose the middle-expensiveness one, on the grounds that I had no idea how health insurance worked but maybe if I bought the cheap one I’d get sick and regret my choice, and maybe if I bought the expensive one I wouldn’t be sick and regret my choice. I am a doctor, my employer is a hospital, and the health insurance was for treatment in my own health system. The moral of the story is that I am an idiot. The second moral of the story is that people probably are not super-informed health care consumers.
This can’t be pure price-gouging, since corporate profits haven’t increased nearly enough to be where all the money is going. But a while ago a commenter linked me to the Delta Cost Project, which scrutinizes the exact causes of increasing college tuition. Some of it is the administrative bloat that you would expect. But a lot of it is fun “student life” types of activities like clubs, festivals, and paying Milo Yiannopoulos to speak and then cleaning up after the ensuing riots. These sorts of things improve the student experience, but I’m not sure that the average student would rather go to an expensive college with clubs/festivals/Milo than a cheap college without them. More important, it doesn’t really seem like the average student is offered this choice.
This kind of suggests a picture where colleges expect people will pay whatever price they set, so they set a very high price and then use the money for cool things and increasing their own prestige. Or maybe clubs/festivals/Milo become such a signal of prestige that students avoid colleges that don’t comply since they worry their degrees won’t be respected? Some people have pointed out that hospitals have switched from many-people-all-in-a-big-ward to private rooms. Once again, nobody seems to have been offered the choice between expensive hospitals with private rooms versus cheap hospitals with roommates. It’s almost as if industries have their own reasons for switching to more-bells-and-whistles services that people don’t necessarily want, and consumers just go along with it because for some reason they’re not exercising choice the same as they would in other markets.
(this article on the Oklahoma City Surgery Center might be about a partial corrective for this kind of thing)
Third, can we attribute this to the inefficiency of government relative to private industry? I don’t think so. The government handles most primary education and subways, and has its hand in health care. But we know that for-profit hospitals aren’t much cheaper than government hospitals, and that private schools usually aren’t much cheaper (and are sometimes more expensive) than government schools. And private colleges cost more than government-funded ones.
Fourth, can we attribute it to indirect government intervention through regulation, which public and private companies alike must deal with? This seems to be at least part of the story in health care, given how much money you can save by grey-market practices that avoid the FDA. It’s harder to apply it to colleges, though some people have pointed out regulations like Title IX that affect the educational sector.
One factor that seems to speak out against this is that starting with Reagan in 1980, and picking up steam with Gingrich in 1994, we got an increasing presence of Republicans in government who declared war on overregulation – but the cost disease proceeded unabated. This is suspicious, but in fairness to the Republicans, they did sort of fail miserably at deregulating things. “The literal number of pages in the regulatory code” is kind of a blunt instrument, but it doesn’t exactly inspire confidence in the Republicans’ deregulation efforts:
Here’s a more interesting (and more fun) argument against regulations being to blame: what about pet health care? Veterinary care is much less regulated than human health care, yet its cost is rising as fast (or faster) than that of the human medical system (popular article, study). I’m not sure what to make of this.
Fifth, might the increased regulatory complexity happen not through literal regulations, but through fear of lawsuits? That is, might institutions add extra layers of administration and expense not because they’re forced to, but because they fear being sued if they don’t and then something goes wrong?
I see this all the time in medicine. A patient goes to the hospital with a heart attack. While he’s recovering, he tells his doctor that he’s really upset about all of this. Any normal person would say “You had a heart attack, of course you’re upset, get over it.” But if his doctor says this, and then a year later he commits suicide for some unrelated reason, his family can sue the doctor for “not picking up the warning signs” and win several million dollars. So now the doctor consults a psychiatrist, who does an hour-long evaluation, charges the insurance company $500, and determines using her immense clinical expertise that the patient is upset because he just had a heart attack.
Those outside the field have no idea how much of medicine is built on this principle. People often say that the importance of lawsuits to medical cost increases is overrated because malpractice insurance doesn’t cost that much, but the situation above would never look lawsuit-related; the whole thing only works because everyone involved documents it as a well-justified psychiatric consult to investigate depression. Apparently some studies suggest this isn’t happening, but all they do is survey doctors, and with all due respect all the doctors I know say the opposite.
This has nothing to do with government regulations (except insofar as these make lawsuits easier or harder), but it sure can drive cost increases, and it might apply to fields outside medicine as well.
Sixth, might we have changed our level of risk tolerance? That is, might increased caution be due not purely to lawsuitphobia, but to really caring more about whether or not people are protected? I read stuff every so often about how playgrounds are becoming obsolete because nobody wants to let kids run around unsupervised on something with sharp edges. Suppose that one in 10,000 kids get a horrible playground-related injury. Is it worth making playgrounds cost twice as much and be half as fun in order to decrease that number to one in 100,000? This isn’t a rhetorical question; I think different people can have legitimately different opinions here (though there are probably some utilitarian things we can do to improve them).
To bring back the lawsuit point, some of this probably relates to a difference between personal versus institutional risk tolerance. Every so often, an elderly person getting up to walk to the bathroom will fall and break their hip. This is a fact of life, and elderly people deal with it every day. Most elderly people I know don’t spend thousands of dollars fall-proofing the route from their bed to their bathroom, or hiring people to watch them at every moment to make sure they don’t fall, or buying a bedside commode to make bathroom-related falls impossible. This suggests a revealed preference that elderly people are willing to tolerate a certain fall probability in order to save money and convenience. Hospitals, which face huge lawsuits if any elderly person falls on the premises, are not willing to tolerate that probability. They put rails on elderly people’s beds, place alarms on them that will go off if the elderly person tries to leave the bed without permission, and hire patient care assistants who among other things go around carefully holding elderly people upright as they walk to the bathroom (I assume this job will soon require at least a master’s degree). As more things become institutionalized and the level of acceptable institutional risk tolerance becomes lower, this could shift the cost-risk tradeoff even if there isn’t a population-level trend towards more risk-aversion.
Seventh, might things cost more for the people who pay because so many people don’t pay? This is somewhat true of colleges, where an increasing number of people are getting in on scholarships funded by the tuition of non-scholarship students. I haven’t been able to find great statistics on this, but one argument against: couldn’t a college just not fund scholarships, and offer much lower prices to its paying students? I get that scholarships are good and altruistic, but it would be surprising if every single college thought of its role as an altruistic institution, and cared about it more than they cared about providing the same service at a better price. I guess this is related to my confusion about why more people don’t open up colleges. Maybe this is the “smart people are rightly too scared and confused to go to for-profit colleges, and there’s not enough ability to discriminate between the good and the bad ones to make it worthwhile to found a good one” thing again.
This also applies in health care. Our hospital (and every other hospital in the country) has some “frequent flier” patients who overdose on meth at least once a week. They come in, get treated for their meth overdose (we can’t legally turn away emergency cases), get advised to get help for their meth addiction (without the slightest expectation that they will take our advice) and then get discharged. Most of them are poor and have no insurance, but each admission costs a couple of thousand dollars. The cost gets paid by a combination of taxpayers and other hospital patients with good insurance who get big markups on their own bills.
Eighth, might total compensation be increasing even though wages aren’t? There definitely seems to be a pensions crisis, especially in a lot of government work, and it’s possible that some of this is going to pay the pensions of teachers, etc. My understanding is that in general pensions aren’t really increasing much faster than wages, but this might not be true in those specific industries. Also, this might pass the buck to the question of why we need to spend more on pensions now than in the past. I don’t think increasing life expectancy explains all of this, but I might be wrong.
IV.
I mentioned politics briefly above, but they probably deserve more space here. Libertarian-minded people keep talking about how there’s too much red tape and the economy is being throttled. And less libertarian-minded people keep interpreting it as not caring about the poor, or not understanding that government has an important role in a civilized society, or as a “dog whistle” for racism, or whatever. I don’t know why more people don’t just come out and say “LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY, AND WE’RE MOSTLY JUST DESPERATELY FLAILING AROUND LOOKING FOR SOLUTIONS HERE.” State that clearly, and a lot of political debates take on a different light.
For example: some people promote free universal college education, remembering a time when it was easy for middle class people to afford college if they wanted it. Other people oppose the policy, remembering a time when people didn’t depend on government handouts. Both are true! My uncle paid for his tuition at a really good college just by working a pretty easy summer job – not so hard when college cost a tenth of what it does now. The modern conflict between opponents and proponents of free college education is over how to distribute our losses. In the old days, we could combine low taxes with widely available education. Now we can’t, and we have to argue about which value to sacrifice.
Or: some people get upset about teachers’ unions, saying they must be sucking the “dynamism” out of education because of increasing costs. Other people fiercely defend them, saying teachers are underpaid and overworked. Once again, in the context of cost disease, both are obviously true. The taxpayers are just trying to protect their right to get education as cheaply as they used to. The teachers are trying to protect their right to make as much money as they used to. The conflict between the taxpayers and the teachers’ unions is about how to distribute losses; somebody is going to have to be worse off than they were a generation ago, so who should it be?
And the same is true to greater or lesser degrees in the various debates over health care, public housing, et cetera.
Imagine if tomorrow, the price of water dectupled. Suddenly people have to choose between drinking and washing dishes. Activists argue that taking a shower is a basic human right, and grumpy talk show hosts point out that in their day, parents taught their children not to waste water. A coalition promotes laws ensuring government-subsidized free water for poor families; a Fox News investigative report shows that some people receiving water on the government dime are taking long luxurious showers. Everyone gets really angry and there’s lots of talk about basic compassion and personal responsibility and whatever, but all of this is secondary to the question of why water costs ten times what it used to.
I think this is the basic intuition behind why so many people, even those who genuinely want to help the poor, are afraid of “tax and spend” policies. In the context of cost disease, these look like industries constantly doubling, tripling, or dectupling their prices, and the government saying “Okay, fine,” and increasing taxes however much it costs to pay for whatever they’re demanding now.
If we give everyone free college education, that solves a big social problem. It also locks in a price which is ten times too high for no reason. This isn’t fair to the government, which has to pay ten times more than it should. It’s not fair to the poor people, who have to face the stigma of accepting handouts for something they could easily have afforded themselves if it was at its proper price. And it’s not fair to future generations if colleges take this opportunity to increase the cost by twenty times, and then our children have to subsidize that.
I’m not sure how many people currently opposed to paying for free health care, or free college, or whatever, would be happy to pay for health care that cost less, that was less wasteful and more efficient, and whose price we expected to go down rather than up with every passing year. I expect it would be a lot.
And if it isn’t, who cares? The people who want to help the poor have enough political capital to spend eg $500 billion on Medicaid; if that were to go ten times further, then everyone could get the health care they need without any more political action needed. If some government program found a way to give poor people good health insurance for a few hundred dollars a year, college tuition for about a thousand, and housing for only two-thirds what it costs now, that would be the greatest anti-poverty advance in history. That program is called “having things be as efficient as they were a few decades ago”.
V.
In 1930, economist John Maynard Keynes predicted that his grandchildren’s generation would have a 15-hour work week. At the time, it made sense. GDP was rising so quickly that anyone who could draw a line on a graph could tell that our generation would be four or five times richer than his. And the average middle-class person in his generation felt like they were doing pretty well and had most of what they needed. Why wouldn’t they decide to take some time off and settle for a lifestyle merely twice as luxurious as Keynes’ own?
Keynes was sort of right. GDP per capita is 4-5x greater today than in his time. Yet we still work forty hour weeks, and some large-but-inconsistently-reported percent of Americans (76? 55? 47?) still live paycheck to paycheck.
And yes, part of this is because inequality is increasing and most of the gains are going to the rich. But this alone wouldn’t be a disaster; we’d get to Keynes’ utopia a little slower than we might otherwise, but eventually we’d get there. Most gains going to the rich means at least some gains are going to the poor. And at least there’s a lot of mainstream awareness of the problem.
I’m more worried about the part where the cost of basic human needs goes up faster than wages do. Even if you’re making twice as much money, if your health care and education and so on cost ten times as much, you’re going to start falling behind. Right now the standard of living isn’t just stagnant, it’s at risk of declining, and a lot of that is student loans and health insurance costs and so on.
What’s happening? I don’t know and I find it really scary.
The link you posted to the Oregon Experiment (link) doesn’t seem to match what you said about it. You mention it doesn’t affect health outcomes in a meaningful way, but the study’s authors state:
Those effects aren’t massive, but they certainly are “meaningful”.
Did you just disagree with the analysis of the authors?
I think that the cost disease in higher education is actually less severe than it appears. Tuition has increased tremendously, but tuition is not a great measure of how much college actually costs. Cost per student is more accurate because it’s a measure of how much money it actually costs to educate a student, rather than how much the student is paying, which as you mentioned could be much more arbitrary. Luckily, the story for cost per student is a lot less bleak. Using the same tuition data set you did and adjusting for inflation, it looks like tuition on average was $9,000 in 1992, but according to 1992’s OECD education report, the cost per student was $22,000. At public institutions, the cost per student was $18,000 in 1992 and $26,000 in 2014, which is not nearly as severe an increase as the tuition numbers would imply.
It doesn’t look like education has ever been cheap; it looks like it was just heavily subsidized. A cursory look at per-student spending in other countries shows costs similar to the United States, even in higher education. The US still spends considerably more than other nations, but not by a factor of 10 (for higher education it’s twice the average, for primary and secondary it’s even lower). German universities had a cost per student of about $11,000 in 1992, compared to $17,000 today. This is a smaller increase compared to the United States, but it isn’t that far off.
This isn’t really any comfort to students though; even if cost per student is increasing more slowly than tuition, tuition is what we actually pay, and an increase of a factor of 10 is really bad. What it does mean is that the “cost disease” is happening in the price tag, rather than the actual cost. ‘Universities are charging too much’ seems like an easier problem to solve than ‘higher education is mysteriously getting more costly.’
I think your idea that colleges charge what they can get is probably close to the truth. In Germany, where tuition is covered entirely by the state, universities have less of the administrative bloat and fewer frills than American ones, with much larger class sizes and much lower student engagement through extracurricular activities. Universities have no leverage to charge more because it isn’t up to students whether to pay those fees. In the United States, many students get federal aid, which means students can choose where to put the government funds allocated to education. Students on average spend thousands of dollars below the posted values due to aid and scholarships. What this means is that colleges can charge as they please, taking much of their income from federal money and scholarships rather than students. This money can be put towards all the wonderful amenities that American universities tend to have, which have to get nicer and nicer every year as universities keep up with each other.
Basically, tuition is a lousy indicator of education cost, and cost per student is not actually increasing as dramatically as it appears. Additionally, the average student doesn’t pay the posted tuition value. That doesn’t magically make tuition lower, but it does mean that the problem is probably easier to solve than it seems. Tuition is likely just increasing because colleges are charging what they can, and until something is done about that, costs will continue to rise.
Since individual hospital rooms are being used here as an example of excessive quality, Dhruv Khullar just wrote an essay for the NYTimes with links to studies claiming that they are a cost-effective design choice, even if you care only about medical outcomes.
I haven’t read all 1000 comments, but enough to get the flavor. Here is the crazy one. The religious explanation, Haggai 1:5-10.
Nah, couldn’t be. Absolute craziness.
Your answer is declining EROI.
“Per student spending has increased about 2.5x in the past forty years even after adjusting for inflation.”
There’s no source for this? Or are we just supposed to take the CATO graphic? By the numbers at https://nces.ed.gov/programs/digest/d15/tables/dt15_236.55.asp, inflation-adjusted per-student spending has risen to about 180% of its 1975 level, from $7,244 to $13,142.
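For what it’s worth, the ratio those NCES figures imply is quick to check:

```python
# Per-student spending from the NCES table cited above
# (inflation-adjusted dollars, 1975 vs. 2015)
spend_1975 = 7_244
spend_2015 = 13_142

ratio = spend_2015 / spend_1975
print(f"{ratio:.2f}x")  # ~1.81x: spending is roughly 180% of its 1975 level,
                        # i.e. about an 80% real increase
```

That is a substantial increase, but well short of the 2.5x figure in the post.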
Anecdote time!
My parents both work in the school system. One as a teacher, one as an administrator (father and mother, respectively).
One of the things my mother recently was talking to me about regarding her work was the expansion of paperwork. To go into more detail, the issue relates specifically to digital reporting software. Their school recently switched over to using software to manage a bunch of their government mandated data collection. After finally implementing this, my mom said that it almost doubled the amount of time she had available. However, the following year, apparently in response to the decreased workload (could be coincidence though), the data required from them ballooned. As a result, they now spend just as much time doing paperwork as before the software speedup.
This connection to large sprawling bureaucracies is something these cases have in common, and an explanation that still works in the private school example. I don’t think this is the only factor, but it definitely seems like a major one.
What about the patient experience of those infected with HIV, i.e. not dying?
On shelter costs, I think this has more to do with the effects of overly restrictive zoning than anything else.
A potential theory: this is because the US uses the “private producers, public subsidies” model more than other countries, where public subsidies are often replaced by wholesale public ownership. When the public subsidizes private, profit-seeking enterprises (including in education and health), those enterprises have an incentive to keep bloating costs and passing the tab to taxpayers; on the other hand, contrary to what many libertarians claim, entirely state-run institutions have an incentive to keep costs lower, as they only have limited budgets, constrained by competing interests within the state bureaucracy. Just theorizing here.
For instance, in Finland, all universities are public – you wouldn’t even be able to run a for-profit university. Under the current government, the universities have been facing cuts, though they can now also raise some private funds. Finnish universities don’t have football teams and so on, and the range of extracurricular activities they offer is really quite modest compared to American universities, as I witnessed firsthand while in the US for an exchange program.
Would lend credence to the idea that the best way to do things is either entirely private or entirely public, depending on the sector.
I have a hard time seeing the causal logic jump between having only public/private services in an industry and having lower costs.
Could it be that the causal link between public/private blend and higher costs goes the other way?
Suppose a service starts out strictly public/private. Then as time goes by the costs increase. The people involved in the market need a new source of funding in order to keep up services. People lobby the government to allow private funding/provide public funding. Soon you have a hybrid.
This flows a lot better for me than saying that hybrids are naturally subject to excessive price increases.
> I have a hard time seeing the causal logic jump between having only public/private services in an industry and having lower costs.
Private industry generally means profit-driven. Greater income, all things equal, means higher profits. So it is in the interest of private industry to raise costs to justify that higher income. In the absence of price competition, or the ability to vastly expand the market, this interest dominates.
Most modern businesses are run by people who have done an MBA, so they know to stay out of price wars. So limited competition by itself does little to counter this effect; 1/3 of 10x the money is more than 2/3 of 1x.
The more serious the businesses are about making profits, the more costs rise. A lot of European companies can be nominally for-profit while not really caring how much profit they make, so their costs rise more slowly.
Some US companies like Apple, Amazon and SpaceX are more ego-driven than profit-driven. Such businesses can reverse cost disease, ending up the industry leaders as a result. What’s not so visible when that happens is that they would probably have made more money as one of several players in the old high-priced market.
The comment that I was referencing seemed to imply that having a market that was part public and part private leads to worse outcomes than having all one or all the other. It was that argument that I was asking for clarification on.
I worry that your model of “Higher costs -> higher profits, therefore businesses seek higher prices” is overly simplistic. Sure, businesses want the price of what they sell and alternatives to what they sell to be high, but they want the cost of literally everything else in the economy to be low. Furthermore, they often have negotiating power to push their suppliers to accept lower prices. In the end shouldn’t this balance out somewhat?
Not only that, but if rising prices were the product of profit maximization, a 4x increase in price would correspond to profits being 75% of gross income. I am fairly sure the real numbers are not even close.
Furthermore, if cost disease is simply a matter of profit maximization why should it affect the contemporary US more than other times and places?
The money is going somewhere, but profits can only account for a slice of the cost increase.
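The margin arithmetic behind that 75% figure is worth spelling out. A toy sketch (prices hypothetical), assuming the price rises while the unit cost stays fixed:

```python
def implied_margin(old_price, multiplier, old_margin=0.0):
    """Profit margin implied if price rises by `multiplier` while unit cost stays fixed."""
    unit_cost = old_price * (1 - old_margin)  # unit cost implied by the old margin
    new_price = old_price * multiplier
    return (new_price - unit_cost) / new_price

# A 4x price increase starting from a near-zero margin would mean
# profits are 75% of gross income:
print(implied_margin(100, 4))        # 0.75
# Even starting from a healthy 10% margin, it would imply 77.5%:
print(implied_margin(100, 4, 0.10))  # 0.775
```

Since real margins in education and health care are nowhere near those levels, profit-taking alone can’t account for the price increases, which is the point of the comment above.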
It’s not that the money goes to profits, it’s that a nimble and efficient profit-driven company follows financial incentives faster than a slow lumbering public-sector organisation. Whatever technology development, restructuring or marketing is needed to follow those incentives will be done faster, with less attention to the preferences of current workers (who probably want to just cure people/build tunnels or whatever).
If the incentives lead to low prices, an efficient company gets to lower prices faster. And similarly for high prices.
For example, a ‘market’ based on negotiated contracts, as opposed to build-then-sell, is commonly thought to favor high prices. Especially if the contracts are fixed-price, and the purchasers get evaluated on cost over-runs rather than absolute cost. In such a market, the company that goes bust is not the one that is trying to sell things at a price people won’t pay, but the one that underbid and is suddenly liable for 5x their revenue.
Let me suggest a few reasons that public-private partnerships are more problematic than either public or private institutions on their own:
* Anti-competitive advantages: Public-private institutions are almost always granted significant competitive advantages, if not outright monopolies in their field by fiat. (This is often the reason for the public intervention in the first place.) This eliminates a great deal of competitive pressure and allows them to be significantly less efficient than their competitors, neutralizing the primary market force that makes private institutions more efficient than public ones. This largely negates the entire point of privatization in the first place.
* Regulated profit margins: A bureaucracy will naturally attempt to grow itself as per Parkinson’s Law. But a bureaucracy which turns a profit for its participants has a higher incentive to swell: if the owners of the organization are permitted to, say, extract a fixed 5% profit margin by law, they have every incentive to bloat both costs and expenses as high as possible so that they’re getting 5% of a much larger pie.
* Subsidized risk: Organizations with privatized gains and publicly-subsidized losses have every incentive to take risks which would be irrational for a private organization; they have no “skin in the game” because the taxpayer is on the hook for their mistakes.
* Subsidized demand: Private organizations normally have to carefully balance price increases with demand in order to avoid losing customers. If the government subsidizes demand, the private stakeholders are free to raise prices indefinitely, as the customers are no longer price-sensitive: whatever they can’t afford will be covered by the taxpayer. (This is the single largest factor affecting higher education costs, in my view.)
Note that some of these complaints apply to purely public institutions as well, particularly monopoly inefficiency. The main difference, in my view, is that public institutions tend to be more closely scrutinized for cash flow outlets; there are usually fewer opportunities for the individuals within a public institution to siphon money off into their own wallets and more consequences when they do, since it is clearly a violation of public trust and not a privately-made business decision. I know this is going to be a controversial statement with a lot of posters here, and I could write a whole post on why the government actually gets a bit more blame for inability to control costs than it should. But the short version is that it’s at least as much a function of inefficiencies of scale as any sort of mismanagement or malfeasance.
I think that fails to address the issue of why other countries have less of a problem in this area. Does France really have less administrative bloat, Sweden more competition?
Top academic economist John Cochrane likes and discusses this post. He writes in part:
“So, what is really happening? I think Scott nearly gets there. Things cost 10 times as much, 10 times more than they used to and 10 times more than in other countries. It’s not going to wages. It’s not going to profits. So where is it going?
“The unavoidable answer: The number of people it takes to produce these goods is skyrocketing. Labor productivity — number of people per quality adjusted output — declined by a factor of 10 in these areas. It pretty much has to be that: if the money is not going to profits, or to each employee, it must be going to the number of employees.” …
“So, my bottom line: administrative bloat.”
“Well, how does bloat come about? Regulations and law are, as Scott mentions, part of the problem. These are all areas either run by the government or with large government involvement. But the real key is, I think, lack of competition. These are above all areas with not much competition.”
He does not seem to be realistically grappling with this. He is not offering any proof that regulation and administrative bloat are 10 times less expensive in other countries.
He doesn’t even really offer any proof for the assertion that administrative bloat can make things cost 10 times more. He hand-waves, asserts the problem is regulation reducing competitiveness, and confirms his biases.
FYI, “Inflation” isn’t really something you can just “adjust for”. The reason for this is that inflation is calculated via bundles of goods – basically, you take a bundle of goods in year X, then another bundle of goods in year X+1, and you compare the value of those goods.
This works okay for year-to-year stuff, but it starts getting really bad if you compare across larger time spans. For instance, if you compare a bottom-end 2016 vehicle to a comparable 1990 vehicle, you’ll find that “inflation” over that period on the cost of the car was only about 20% – an increase from about $10,000 nominal to about $12,000 nominal (note that the bottom-rung 2016 vehicle has features not even available in a lot of 1990 vehicles, and better safety, etc., but it was the closest I could get).
Meanwhile, the average cost of a new car has increased by a lot more than that – but are we really comparing apples to apples here when we’re comparing cars which got much worse gas mileage, had fewer features, no AC, etc. to modern vehicles? Probably not, and it exaggerates inflation (and is also why wages “appear stagnant” – they’re not, they’ve gone up a lot).
But this all gets ignored and we just treat it all as inflation. As it turns out, this (significantly) overestimates inflation, and gives us a poor idea about how prices have really changed. Some things have gone up quite a bit in price, but if you look at a lot of things – TVs, video games, movies (that you buy and take home), computers, etc. – they haven’t really changed much in price for decades. Meanwhile other things have gone up a lot in that same time span but seen significant quality improvements, while others have just inflated massively without major gains.
Note that this really just makes things worse – the things which have been rising ridiculously in terms of cost are rising even more ridiculously in terms of cost, while many things (TVs, for instance) aren’t rising in cost at all. A 24″ TV cost about $170 in 1959 (in 1959 dollars!) – about the same amount that a 24″ full-color 1080p LCD TV costs today.
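The car example above can be made concrete. A toy illustration of how hedonic adjustment drives a wedge between sticker prices and measured inflation (the quality-adjusted figure is the assumption doing the work, and the 2016 sticker price is invented for illustration):

```python
# All figures approximate/illustrative, per the comment above.
nominal_1990 = 10_000    # bottom-rung new car, 1990
nominal_2016 = 24_000    # typical new-car sticker price, 2016 (assumed)
adjusted_2016 = 12_000   # "1990-equivalent" car after stripping out
                         # quality gains (AC, safety, mileage, etc.)

sticker_change = nominal_2016 / nominal_1990 - 1       # what buyers actually see
measured_inflation = adjusted_2016 / nominal_1990 - 1  # what the index records

print(f"sticker price up {sticker_change:.0%}, measured inflation {measured_inflation:.0%}")
```

The index only records the quality-adjusted 20%, even though the cheapest car anyone can actually buy costs far more than it used to.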
—
Anyway, to answer your questions:
#8: Nonwage compensation is actually increasing faster than wage compensation is. This is a significant factor. However, a lot of this is going to health care costs, so really, a lot of it is just inflation.
But I think the real answer is actually the one people don’t like: #2.
The assumption that markets work is based on a number of factors which, frankly, just aren’t true. Most people aren’t even remotely rational agents, and while the laws of supply and demand seem legitimate, in reality they are only meaningful if consumers actually change their behavior. And while consumers will change their behavior in some cases (we saw major changes in driving habits after the last spike in oil prices), the reality is that this does not appear to happen (or indeed, even be possible) WRT health care and education.
If someone tells you that it costs $500,000 per room to build a hospital, someone else tells you it costs $1 million, and someone else $1.5 million, most people will just be like “Well, okay.” How do people know how much things should cost? They really don’t in most cases; I can’t imagine most people build multiple hospitals. And people don’t really shop around for health care; we’re more or less told where we’re supposed to go. Education-wise, few people are going to change their minds mid-course (and the colleges don’t want people to go there for just a year to get their degree, so they disincentivize such switching), and it is hard to know how good the quality of education is somewhere. I went to Vanderbilt; I’ve also taken some classes at Oregon State University. In terms of upper-level graduate classes, there wasn’t much of a difference; Vanderbilt was better, but only somewhat. It sounded like at Vanderbilt only the very lowest-level courses were big faceless lectures, while at OSU there were a lot more impersonal classes with professors who sucked or didn’t care.
There was a significant difference in student quality, though – Vanderbilt’s students were significantly better.
We also know from studies of parents that most parents don’t really know how to make good choices about education – they struggle to evaluate the quality of schools. In fact, even highly educated people struggle to evaluate the quality of schools – I mean, just look at all of our attempts to measure teacher quality and how laughably terrible they are. If experts can’t do it, why on Earth would we expect parents to be any better at it?
The idea that education even functions as a market in a meaningful way is deeply questionable – if you have almost total blindness as to the quality of the product, you only really buy the product a very small number of times, and you have little good ability to compare products, how can you possibly expect people to know quality? I’d imagine that most people think paying more money indicates a higher quality education, because more expensive things are better.
And the same is true of health care.
—
Incidentally, the subway costs thing is actually something more complicated: it has to do with regulations and rising prices of property. This *is* a problem which can be fixed by deregulation, at least to some extent, though it might require some actual regulation as well to try and tamp down on some of the crap that happens (i.e. ridiculous lawsuits which tie projects up for years).
The State of Oregon passed a law prohibiting cities from not expanding out to their Urban Growth Boundaries – you no longer have to vote on it, people can just do shit. People are fighting it tooth and nail, in part because it is going to lower housing prices (which of course, will devalue their real estate). We really need to be more aggressive about causing severe damage to these people.
Regarding the first point, there are so-called “hedonic adjustments,” though I have no opinion on how successful they are at capturing the values they claim to measure.
I think there is a major gap in this analysis. Money gets spent. It can be spent on education or healthcare or food or something else. If it’s invested in a business, the business spends it on stuff. Therefore the total cost of everything increases with total income. That’s not exactly true every year in an accounting sense, but it’s very close to true in terms of long-run trends.
Therefore, if you apportion costs into fixed categories like education, healthcare and food, the average rate of cost increase is going to be the increase in income. You can ask why some things, like education and healthcare, increase more than other things, like food, but not why everything is going up in price.
One solution is to introduce new categories. People spend more on communication in 2017 than they did in 1970, so you could say communication costs are skyrocketing. Or you could create new categories for streaming video and social apps and Wikipedia and say communication costs fell dramatically, and people started to spend money on this new category called, say, “network content.”
I don’t think it makes much sense to evaluate education quality by test scores or health care quality by life expectancies. A major shift since 1970 is the fraction of mothers of school age children who work. That changes the nature of what schools are required to do. And most visits to doctors are not to prolong life. They are to relieve symptoms, or to hasten recovery, or to get reassurance something is not serious. A 60-year-old in 2017 is much healthier than a 60-year-old in 1970.
The cost side is more objective, but in many cases, people are paying more, not to buy more education or healthcare, but to get them delivered more conveniently, pleasantly, faster with less worry. In 1970, you had to go to a physical bank to deposit your physical paycheck, and stand in long lines because everyone had to go at lunchtime on payday, and the tellers took the same lunch hour as office workers. Banks weren’t open evenings or weekends, and didn’t have branches in residential areas. There were no cash machines. None of that shows up in cost statistics for financial services, but the waiting times were a significant cost. People spent a lot more time waiting in line in 1970, and filling out forms, and dialing telephones, and going to the post office and DMV.
I don’t say that all this makes up for cost increases, just that you can’t get much insight comparing nominal dollar costs to crude quality measures.
Economics isn’t accounting and accounting isn’t economics.
The price of something increasing generally makes demand go down unless supply is restricted. “Money has to get spent on something” is very different from “money gets spent on something, and so the costs of things go up without any improvements in quality.”
Forgive me if this point has been made already; the comments ballooned over 900 while I wasn’t paying attention this weekend. But isn’t a plausible explanation simply a version of the law of diminishing returns? Once you reach a basic level of competency/proficiency in most fields, any further advances become progressively more expensive. Life expectancy is a great example – the low-hanging fruit of infant mortality, sanitation, and basic nutrition has been plucked. The days of adding decades to the average life expectancy are over; everything else is at the margins. Because of that, the progress at the margins becomes massively more expensive when compared to the earlier era that had plucked the low-hanging fruit.
The same could be said of education, infrastructure, and housing. Once you have near-universal literacy, improve nutrition, reduce lead poisoning, and (I suppose) get the basic pedagogical structure in place for how to teach students, then everything else is just messing around at the margins – Smart Boards and classroom iPads and programming classes and educational BluRays instead of film strips. The old subway projects figured out the basic engineering of “put a train underground” and had the low-hanging fruit of less-developed areas. The new ones are safer, more comfortable, and also have to be built in already-occupied high-intensity developed areas. Once you get the basics of housing with indoor plumbing and electricity developed on open spaces, everything else since then is improved comfort, regulated safety, size, competing for space in a more highly developed area. The basic problems have all been solved; the remaining incremental improvements come at an ever-increasing cost for each successive marginal gain.
It seems to me that’s the likely physical explanation for what’s happening. The costs are more or less distributed everywhere – increased management bureaucracy in education and healthcare and infrastructure (to implement the marginal “improvements” in regulation, planning, non-discrimination, client support, etc.); increased numbers of worker hours (to implement electrical conduit instead of Romex or to build an extra story or to glass-in a steam shower instead of a basic tub or to reduce student ratios below 20-1); increased planning costs (e.g. more engineers to chart loads under extra buildings above the tunnels); newer higher-cost materials and equipment (to reduce environmental impacts, allow cutting edge medical testing and treatment, pass FDA trials for your new drugs). So you wouldn’t expect them all to fall in one sector like “skyrocketing salaries for one class of worker.” They’ll show up in every category, wherever there is marginal improvement over the already-mostly-competent earlier baseline.
This, it seems to me, is actually key to understanding Scott’s point about “Why don’t we just have the cheaper older alternative, this would practically eliminate funding concerns.” All of the sectors he’s talking about are areas where government is heavily involved (primary education is obvious; college is almost entirely federally funded at this point; infrastructure has a heavy government spending component and is highly regulated; and housing is controlled very significantly by zoning and building code regulations). Scott says “government vs. private doesn’t seem to make a difference,” and in those sectors he’s generally right, because it isn’t government provision of the services that is 10x less efficient (I would argue it’s some amount less efficient, but would stipulate it isn’t anywhere near an order of magnitude). Rather, it’s that government involvement in those sectors – particularly in our democratic culture – tends to mandate uniform minimum standards for quality and service provision. And due to the diminishing returns going after higher and higher fruit, as the standards rise, costs everywhere balloon. So the problem isn’t that government-run healthcare is terrible and unaffordable because it’s government-run and government is bad; it’s that between Medicare, Medicaid, regulation, and even tort law (note: also a function of government), the standard of care tends to be inflated to whatever the current state of the art may be. And the current state of the art will, inevitably, call for vastly more cost for only marginal gains. The problem isn’t only that schools are government-run; it’s that every child in America has a “fundamental right to education.” While education is certainly unequal due to local funding, nevertheless the fulfillment of that “right” tends to be defined in comparison to the latest standards, mandates, and technology – all of which are those same marginal-improvement cost drivers. 
The problem isn’t that government controls the construction of infrastructure; it’s that it’s no longer possible to build anything by the processes and standards of last century. Nor is it all explicit governmental mandates or regulations (though much is); social pressures, professional standards groups, tort law, construction contracts, etc. can all provide pressure to conform to the state of the art. But most of those pressures come about in connection with government standards one way or the other, even if not literally promulgated in the Federal Register.
What you would expect, if the low-hanging fruit explanation is correct, is less cost inflation in areas where there is less pressure to conform to the current state-of-the-art. And lo and behold, in areas with less government pressure to standardize, that seems to be true. If I want the computer that was state-of-the-art from 4 years ago, I can get it dirt cheap. Flip phones are out-competing smart phones in emerging markets like India. I can spend $4.50 on “state of the art” toothpaste with dissolved breath strips, fluoride, and peroxide whitening, or I can get a basic abrasive paste for $0.79 a tube. If you want the crummiest toilet paper, you can get it for a quarter the price of Ultra-Quilted Mega-Softy Cuddly Bear brand. But the heavier the government involvement, the more pressure – direct and indirect – to standardize quality. And standardizing quality tends to an upward ratchet that bakes in the new state of the art. The new state of the art, in turn, is inevitably vastly more expensive for very little marginal gain, because we already figured out the basics of how to do things in these sectors decades ago.
TL;DR – We’re accomplished enough now to know how to make marginal improvements, and rich enough to spend lots more money on some marginal improvements. Normally this is not a big problem because the whole spectrum still exists and the priciest options are confined to those relatively few buyers willing to pay for them, while more price-conscious consumers use the original versions. The sectors where there is the most Cost Disease tend to be government-dominated, not because government is particularly incompetent (or not just) but because government (particularly in the U.S.) tends to have an egalitarian, standardizing effect that pushes the state-of-the-art to be the new minimum, so that all consumers end up paying for it.
I have a few more specific links to share now. Again, this is mostly the result of higher-level decisions, or decisions made elsewhere within the US and the global economy. The sectors you highlight are mostly protected from external competition, and the US is running net external and internal budget deficits while under pressure to funnel as much money up to the top 1% as possible while shielding it from foreign investors, so these sectors become the natural money sinks in the economy for our domestic rich to inflate. The people who work in them aren’t the beneficiaries in terms of increased salaries and benefits, beyond simply having somewhat more secure jobs than the rest of the country.
Here are the more detailed links that get at some of the underlying structure:
American Thermidor (http://truth-out.org/archive/component/k2/item/53403:stirling-newberry–american-thermidor): An explanation of the cycle of energy deficits, import/export flows, and wealth concentration at the top that, as a side effect, leads to the inflation in costs you document above.
A World Half Free: Neo-Mercantilism and Trade (http://www.dailykos.com/stories/2006/1/25/181190/-): A more direct explanation of stealth protectionism, the new triangle trade, and the problems that trading between free and unfree economies creates. Again, this is a look at higher level structures that result in what you document in your post.
And further explanations and context here:
http://www.dailykos.com/stories/2005/12/5/169646/-
http://www.dailykos.com/stories/2005/12/6/169882/-
More available for perusal here, especially with a focus on the 2004 – 2006 timeframe: http://www.dailykos.com/blogs/Stirling%20Newberry
These were all written back around 2005/06, so references to politics and decisions of the time are a bit dated, but the underlying structures remain in place. In fact, Stirling was too optimistic about how quickly we would break free from the trap we’re in. Instead of a break in 2008 we got Obama, and instead of a break in 2016, we got Trump. Going to be 2020 or 2024 at this rate, with the likelihood of a real war becoming higher as we push real change back further.
(Parenthetically, these links also explain why you, Scott, have observed that social change is so hard but “biological” change (e.g. removing lead from gas, paint, etc.) is relatively easy. This observation is only true in the current era because we are actually living in a hugely stagnant and increasingly regressive global political economy. Basic change is not on the table, hasn’t been for decades, arguably most of a century, but it will be again as soon as the current system hits limits and we start crashing out. Since we’ve delayed changes for so long, emphasis on “crashing.”)
I guess I don’t really understand why it would be difficult to figure out where the money is going in a situation where we have increased costs for services over time. There are records, right? I don’t expect SA to personally engage in an accounting exercise, but this is just an accounting exercise.
I find statements like this to be particularly perplexing though:
Like, obviously, whatever result you might expect to get out of an education system, it would be crazy to think the result you were paying for would be measured by test scores. It doesn’t make sense. Even if we’re doing an efficiency assessment, the most important number is not going to be the average test score (this really hurts my head) but the quantity of students being cranked out of an institution with satisfactory competencies, which the tests only exist to confirm. How many more students are we graduating? I would be surprised if it were 2.5x the number of students we were graduating in 1971, but I would also be surprised if the increase were insignificant.
Note that Scott’s graph about education shows a huge growth in per-student costs. Now, it might be that having to juggle 2.5x the number of students makes the cost per student rise, but it’s not obvious why this should be the case; the obvious solution would be to hire 2.5x the number of teachers and then carry on as before, roughly speaking. But if I retool my factory to produce twice the widgets, but each costs me twice what it used to, something’s gone wrong.
Isn’t this just the “hidden inflation” that some economic commentators of an Austrian bent have been warning us about for years?
You talk about average wages, but not about growth in employment in these sectors. Since 1970 employment in these sectors went from 6% of the workforce to 15%. That’s very expensive.
Hanson says that healthcare is primarily about signalling care, not health. Caplan says that school is primarily about signalling good work, not education. If either or both of them are right, then the amount we spend and the number of people we employ in these industries is one of the worst things in the developed world.
Your essay shows that the increase in costs is not because of increased wages or increased profits. Economically speaking, the only possibility that remains is that it has occurred because of increased capital investment, no?
But what possible incentive might there be to promote capital investment with no apparent benefit for economic productivity?
I think one possible explanation is that institutions have a covert incentive to create as many jobs as possible without reduction in profit margins. The mechanism for this incentive to arise stems from the structural incentives of administrators and middle-managers in these institutions.
If you look at the healthcare and education sectors as economic firms, the bloat in their respective workforces and capital base seems mysterious. If you look at them as the two largest life-rafts for the economic refugees of globalization and automation, the bloat becomes easier to rationalize.
Increased rents is more likely.
I believe that your last paragraph is on the correct track.
(Had to trawl through to the second-to-last comment to find one that jibed)
You could have an increased regulatory burden, increased taxes, or decreasing productivity.
I found this speech on adjunctification in the humanities quite good at describing the problem, if not necessarily its deeper sources and/or solution.
Part of it is definitely the Molochy incentive structure whereby academic departments obtain the funding and sense of importance to justify the existence of tenure-track jobs precisely by offering far more opportunities for obtaining the credential (a humanities PhD) than actual jobs.
This seems much related to a dynamic described by others in the comments whereby it’s better for one’s prestige, if you’re in a managerial position, to hire several poorly-paid underlings than one well-paid equal. I guess the normal market mechanism to combat this would be: companies whose managers put their own prestige within the company ahead of the profitability of the company get outcompeted.
But the metrics for demonstrating an individual academic department’s contribution to a university are things like “how many PhDs produced,” not something simple like increased sales.
This seems somewhat like a disutility of scale, disconnect between inputs and outputs, as well as mission vagueness: universities, especially, are many different things to many different people (donors, students, professors, administrators, etc.) and get their money in many different ways, further creating a disconnect even beyond that of “students get subsidized loan.”
Though there’s still the bigger question of how, when almost everyone agrees that teaching is central to the mission of a university, they could now devote less than 1/3rd of their money to that mission, even as more and more money comes in. I guess it’s related to the above weird incentives and disconnects, but even then, it’s hard to explain. I guess little disconnects and bad incentive structures can, over time, result in mindboggling results no one wanted, but which were the inevitable result of allowing them to continue.
A critical factor is the differential growth of productivity in labor-intensive vs. other sectors. The huge increases in productivity over the past 50 years have led to real wage increases. However, productivity in some sectors, such as IT, computers, communications, and manufacturing, has risen much more than in industries that have not exploited, or have been unable to exploit, technology – such as education. It still takes an hour to give a one-hour lecture, and professors still lecture the way they have since Socrates. But professors’ wages have to rise to keep them from defecting to the IT, info, etc. sectors. That said, the lack of competition in education and some other fields has also played an important role in insulating education. Example: the Ivy League schools are exempt from certain anti-trust provisions, which permits them to collude on things like tuition assistance and scholarship policies.
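This is the textbook Baumol mechanism, and it reduces to a few lines of arithmetic. A minimal sketch, with purely illustrative assumptions: manufacturing productivity grows 2% a year, lecturing productivity grows 0%, and wages in both sectors track manufacturing productivity so professors don’t defect:

```python
# Toy model of Baumol's cost disease (illustrative numbers, not data).

years = 50
growth = 0.02  # assumed annual productivity gain in the productive sector

wage = 1.0              # wage index, shared across both sectors
widgets_per_hour = 1.0  # manufacturing productivity index

for _ in range(years):
    widgets_per_hour *= 1 + growth
    wage *= 1 + growth  # wages follow the productive sector's gains

widget_cost = wage / widgets_per_hour  # productivity gains offset the raises
lecture_cost = wage / 1.0              # still exactly one lecture per hour

print(f"widget cost: {widget_cost:.2f}")    # unchanged
print(f"lecture cost: {lecture_cost:.2f}")  # ~2.7x, with no change in quality
```

The widget’s unit cost stays flat because productivity gains exactly offset the wage raises; the lecture’s cost rises with the wage – roughly 2.7x over 50 years – without the lecture itself getting any better.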
“education costs have doubled, college costs have dectupled, health insurance costs have dectupled, subway costs have at least dectupled, and housing costs have increased by about fifty percent”
“might markets just not work?”
You’re kidding, right?
The inflation rate for your examples pretty much follows the amount of government regulation or involvement.
Education? Highest. Health insurance? Really high — now. Housing costs? See zoning, rent control, etc.
But food? Some government regs, sure, but we don’t have government grocery stores (like education) and there’s not yet any ObamaFood. And that’s one example you gave in the charts that is not inflating at a rapid rate. See also what happened to air fares after the big airline deregulation…
Shortest summary: systems become dominated by the parasites who game them when no immune mechanism exists. America is stupider than other nations at protecting its institutions against such abuses.
Part of the increase in K-12 costs is probably increased total staff as a compensation for declining teacher quality. In 1970, the average woman who went into teaching had higher test scores (page 37) because there was far less opportunity for women in other professions than there is today.
The percentage change in test scores is not well-defined. Test scores aren’t scaled so that zero is equivalent to no knowledge; the scaling is arbitrary. So a test with a different arbitrary scale would have a different percentage gain. Of course, this is somewhat semantic, because achievement growth has been small relative to cost either way.
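The arbitrary-zero point can be made concrete with the 285 → 287 reading-score figures from the post. The rescaled version below is a hypothetical linear shift, chosen only to show that the “percent gain” is an artifact of where zero sits:

```python
# NAEP-style scores: the zero point of the scale carries no meaning,
# so "percent change" depends on an arbitrary choice of origin.

old, new = 285, 287

pct_original = (new - old) / old * 100
print(f"{pct_original:.2f}%")  # ~0.70%

# Apply a meaning-preserving linear shift (subtract 250 from every score).
# Every comparison between students is unchanged, but the "percent gain" isn't:
old_shifted, new_shifted = old - 250, new - 250

pct_shifted = (new_shifted - old_shifted) / old_shifted * 100
print(f"{pct_shifted:.2f}%")  # ~5.71%
```

Same two-point improvement, same test, radically different “percentage” – which is why the percentage framing in the post should be read loosely.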
“Part of the increase in K-12 costs is probably increased total staff as a compensation for declining teacher quality.”
Teacher quality isn’t declining, but even if it were (I’ll deal with that in a second), more stupid teachers aren’t hired to compensate for the loss of one smart one. The idea is absurd. And there’s vanishingly little evidence that smart teachers are better teachers, despite years of trying to prove otherwise.
Now, it’s true that fewer really bright women go into teaching, but the mean intelligence of teachers has only declined slightly over this time, and it’s not a huge problem because the average intelligence of men entering teaching has increased. Also during that time, more blacks and Hispanics became teachers, and their test scores are lower, which probably explains the increase in the lower IQ range. And while research has disappointed on smarter=better, it’s done a bang-up job of showing that black students in particular do better with black teachers.
Incidentally, it’s simply untrue that teachers aren’t terribly intelligent and most knowledgeable people don’t even suggest it anymore. Teacher credential tests have done much to wipe out a whole range of black and Hispanic teachers, which probably isn’t the best thing, given the research. High school academic teachers have always come from the top half of the SAT range, while elementary school teachers come from just under the average.
Thanks for the response and link. You mention several studies in your blog post that find a positive relationship between teacher test scores and student performance. And most of the studies you mention that don’t find a relationship aren’t actually looking at test scores (they look at graduate degrees, or principal observations instead of student growth, etc…). So I’m not sure why you say there is “vanishing little evidence”.
Also, I’m not saying superintendents are consciously compensating with quantity for quality – just that those trends have moved in opposite directions, which could lead to the near-flat net effect on test scores we see in the data.
The Bacolod paper shows the percentage of female teachers coming from above the 80th percentile of test scores fell from 41% to 19% from 1970 to 1990, and the percentage coming from the bottom 20% rose from 8% to 19%. That seems like a meaningful change. A change in men’s scores won’t compensate for that because they’re a much smaller group.
It looks like the Corcoran et al paper you mention finds a smaller decrease in achievement than Bacolod, so perhaps the difference between those two papers is part of our disagreement.
“There is some heterogenity across races – white students’ test scores increased 1.4% and minority students’ scores by about 20%.”…
Most likely this was the general improvement in minorities’ conditions around that time, giving them better nutrition and a more stable family life.
Illegitimacy increased during that period. And gains from nutrition were probably maxed out before that period started. However, the share of Asians within the minority category went up between 1975 and 1985. That should have raised minority scores for HBD reasons. Enough to explain all of that rise? Can’t say offhand, but it shouldn’t be difficult to calculate that.
Countries like South Korea and Israel have about the same life expectancy as the US but pay about 25% of what we do.
The more laissez-faire an economy is, the more economic activity it produces. But a lot of that extra economic activity is irrelevant or harmful to the real standard of living. This is likely why the US has a higher per capita GDP than most Euro countries, but about the same subjective standard of living.
The profit motive in healthcare leads doctors to prescribe meds and order procedures that patients don’t really need. It’s well-known that the US healthcare system spends more money on the last year of life than Euro healthcare systems. They try to sell more product, to drum up more business for themselves. These incentives are absent in state-run systems.
Also, US society is more litigious than most. Doctors do unnecessary tests and procedures to cover their asses in case of lawsuits.
Some people have pointed out that hospitals have switched from many-people-all-in-a-big-ward to private rooms.
You can’t apply that to the cost of building new NYC subway lines. The only service improvements I can think of in the 25 years that I’ve been using the subway are electronic cards instead of metal tokens, electronic displays showing when trains will probably arrive and wifi at some stations. Those are cheap, small changes.
Fifth, might the increased regulatory complexity happen not through literal regulations, but through fear of lawsuits?
Yes, I think that institutional, lawsuit-related risk tolerance is much lower in the US now than it was, say, in the middle of the 20th century. And that it’s lower in the US than in modern China, for example. How much of the increased subway construction cost was taken up by safety measures for the workers and safety testing for passengers? I doubt that the subway is much safer now than it was 50 years ago. It should be easy to get those statistics. But I can easily imagine the cost of safety measures ballooning.
I think that the increase in litigiousness and institutional risk aversion is related to the decrease in trust and cohesiveness. And that comes with increased diversity. Things that used to be settled informally or accepted without complaint (“life happens”) are now formally complained about or taken to court. And in homogenous, cohesive countries like Japan or China most of this stuff would still be handled informally. There would probably be social taboos against complaining, causing unnecessary work and trouble for others. And this was true in America at some point too.
If people don’t trust each other informally, their relations with each other get more institutionalized, formalized, legalized. And that costs money.
It seems to me that part of the increased costs in education is that new technology doesn’t replace something else. We’re not adding computers and suddenly not having to pay a teacher. Technology is an add-on cost in this instance (buying + maintaining). Also, my teacher friend says that costs for standardized testing have sky-rocketed as these private testing companies try to squeeze everything they can out of the schools because they know they can. Also, I think, administrative bloat pertaining to increased adherence to laws, etc.
BTW housing and infrastructure cost increases are not a mystery — you can still build those same bridges, houses and buildings for more or less what it used to cost in real dollars, but no one would be allowed to live in them or travel on them, and few would want to (especially if you used, say, asbestos), plus you couldn’t spend huge amounts on environmental impact studies and other things society has deemed more valuable since those days.
One of the great unremarked graphs of our time is the graph of house fires per capita by year – they’ve been dropping steadily for 100 years because of rising standards, and the same is true of virtually all similar issues (except some created in part by those standards, such as the use of asbestos).
I’m given to understand that the decline in house fires is largely down to widespread use of drywall, which is cheaper than the old lath-and-plaster method.
Lath-and-plaster is not particularly flammable, the bigger problem was the newspaper stuffed in the cracks. But it’s not just building materials, risks from carpets and mattresses and other interior fire hazards have also been reduced over time.
I was curious so I Googled — apparently (if we can trust random contractors on the intertubes) it was a layered transition that started around 100 years ago. I didn’t find any good sources on historical flammability standards for insulation and consumer goods but I know they’ve changed considerably just since I was young.
http://www.contractortalk.com/f49/when-did-drywall-come-into-common-use-124105/
Smoking is a leading cause of house fires. Smoking rates have steadily dropped over the past couple of decades. No doubt this plays a role.
The house I live in is 100 years old, and the only things it is made of that wouldn’t be used today are some lead paint (painted over a few times) and knob-and-tube wiring (disconnected and rewired a few years ago).
Wolman first invented his process 70 years ago (and of course it’s changed a lot since), drywall came into common use around 60 years ago, fiberglass insulation dates to 1932, synthetic carpets became popular around 60 years ago, major appliances have changed regularly… I could go on. Unless you’re building a cabin out in the woods somewhere, the building inspector is going to laugh at your 1916 building materials… and then make you tear it down.
Double-wall brick house: drywall is mostly cosmetic; windows would be an issue as far as insulation, but a fair number of houses have 100-year-old windows in them plus storms, which isn’t killing costs; fiberglass isn’t needed. Lots of things make it more comfortable (a natural gas boiler rather than coal heat), but the things that you HAVE to have aren’t murdering you with their costs.
I don’t know if you can get a permit for a Franklin stove in a modern building, but it’s unlikely they would let you insulate your roof with 1916 materials. At any rate, the vast majority of people want central heating and central air and modern appliances, and that drives costs.
At the same time all those things are actually getting cheaper to make — even brickmaking has probably seen its share of advances — but not fast enough to keep up with the additional demands for functionality by customers and regulators. The other main driver of costs seems to be unit labor costs, but that cuts both ways since better-paid laborers can afford more housing.
And of course that only covers new construction, a lot of housing pricing disparities are driven by zoning decisions that prevent new construction from absorbing housing demand in many areas.
The fire hazard permitting has become a bit ridiculous. When I had my first condo built in 1999, I was not allowed to move in for the first month because the inspector had decided adjustable vents on the ceiling were a fire hazard because a small child might accidentally close the vent with a ball. The delay increased the cost to me by thousands of dollars, and I was told it was not atypical.
The roof is uninsulated as it is (well, the large air gap trapped by the attic acts as insulation). Wood-burning stoves are still legal in many places.
We can get a multi split system installed in our house for ~ 10% of its purchase price, our appliances (stove, refrigerator, dishwasher, chest freezer, hot water heater, washer/dryer) probably combine to add ~3-4% to the total purchase price. We had the whole house rewired to replace knob and tube for 5-6% of the purchase price.
Insulating the attic is ~1% of total purchase price.
Long story short, the actual cost of the modern parts of our house is fairly low; most of the housing cost appears to be land-driven when I look around (an empty lot 3-4 blocks from us and the same size is listed at 75% of the cost of our place).
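Tallying this commenter’s own rough shares (using midpoints of the stated ranges – these are their figures, not independent data) supports the point:

```python
# Shares of total purchase price attributed to the "modern" parts of the
# house, taking midpoints of the ranges given in the comment above.

modern_parts = {
    "multi-split HVAC": 0.10,
    "appliances": 0.035,
    "rewiring (knob-and-tube replacement)": 0.055,
    "attic insulation": 0.01,
}

total_share = sum(modern_parts.values())
print(f"{total_share:.0%}")  # 20% -- the remaining ~80% is structure and land
```

So on these numbers, everything added since 1916 accounts for only about a fifth of the price, consistent with land being the main driver.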
With regards to houses, at least, this is not true. Although building codes are stricter than they were a couple of decades ago, the difference is not huge, and can’t come close to explaining a 10x discrepancy.
What do builders of residential homes have to do now that they didn’t before? Well, they need to put in insulation (mandatory R-value) – but blown-in cellulose and fiberglass batts are cheap, and so is the labor to install them. This usually pays for itself in a couple years of lower heating/cooling bills, anyway. They need to have more dedicated electrical circuits than they used to, and many of these now have to use GFCI outlets and/or AFCI breakers; all told, this might cost a couple hundred dollars at most on an average-sized home.
On the other side of the coin, there are many advances in residential construction over the past few decades that have helped to save materials and labor. No one uses lath-and-plaster any more; it’s all drywall now, which requires far less (and far less skilled) labor. Outside of a few oddball jurisdictions (Chicago, I believe, is one), BX electrical cable has been replaced with less expensive Romex. Optimum value engineering means that builders are less likely to work by old rules of thumb, and more likely to ask what structural elements are physically and legally required and which can be omitted. And let’s not forget that most laborers have lower wage rates now than a couple of decades ago.
So I find the claim that increased code standards have led to higher housing prices to be completely unpersuasive. NIMBYism is a more likely culprit.
Re higher ed.
Yes, the causes of the large increase in costs are multiple, and all of the various arguments have some merit, including the Delta Cost Project’s noting of higher administrative costs. Plus the rock climbing walls and so forth. I get where you are coming from with flat salaries, but I would still think that some of the admin cost increases we bemoan do not relate only to staff for rock climbing walls. Some pertain to support for instruction and as such can be thought of as additional costs that tend to support Baumol’s premise. I will also mention that I think the tuition cost increases you use to drive the analysis are gross sticker-price tuition increases. But as tuition has gone way up, so has tuition discounting (and federal aid), meaning the rate of increase in average net cost is substantially flatter than the rate of sticker-price increases.
It’s still a cost if taxpayers pay for it.
Oops, good catch on the external aid. But the larger issue by far is the internal discounting of tuition by the institutions themselves. That is not a cost, since the institutions do not spend what their nominal tuitions would indicate but only their net tuitions after discounting, which can easily reach 40%-50% of sticker price.
I’m thinking that a lot of this can be explained by the Pareto Principle (80% of the effort for 20% of the reward, and 20% of the effort for 80% of the reward; except we’re way past 80/20 now, and it’s more like 99.99% to 0.01%), plus a greater amount of empathy in our society and a lower risk tolerance. The schools, hospitals, and houses of the past worked pretty well for most people. But if you were one of the unlucky ones they didn’t work for, life was pretty rough.
Learning disability? Expel them. Chronic Disease? Let ’em die. House full of lead paint? Eh, everyone’s got 8 children anyway, who cares if one of them gets brain damage. Construction? 22,000 people died in the construction of the Panama Canal, and that was considered acceptable. That seems to be the general attitude of people in the past (at least according to the super anecdotal evidence of what it’s like in books I’ve read).
This might not show up as any one particular budget item. There’s probably no “safety” budget, for example, just a general expectation that safety is important, no matter the cost, and maybe some highly paid consultants working diligently to make things even more safe, by adding yet more paperwork and procedures and expensive equipment to every single thing we do.
Overall it’s a very positive thing for society that our empathy for the vulnerable is increasing. But in a strictly utilitarian sense, it is a bit inefficient to spend billions of dollars pushing up safety levels just slightly higher, rather than just giving the money to people directly.
Oh, and the same sort of Pareto Principle problem also comes up in finance. You can make a pension fund that’s solvent for 95% of the years, no problem. But if the fund is supposed to last forever, that last 5% really becomes a problem. You can’t rely on stocks, or any other high-expectation-but-risky investment, so you have to rely on “sure bets” like government bonds, and then you need a massive fund to pay out enough.
Likewise, in health insurance, you can charge a reasonable price for insurance that will cover the majority of people the majority of the time. But what about that one person who needs 24/7 attention from a skilled team of health professionals, plus dozens of different drugs? That one person blows up the costs for everyone else. With Obamacare, there’s no lifetime limit on how much insurance can pay out, so they have to budget for a chance (even a very small one) that you might end up needing literally billions of dollars in health care over your lifetime.
Ok, so here’s my view on this.
During the industrial revolution, the economy ran almost entirely on the backs of the working class. If you wanted to make something, you had to employ A LOT of people. As a result, value (and I mean the abstract idea of true economic value) was extremely dispersed throughout the economy. Each person could add a lot by joining the workforce, and likewise reduce the value of the economy by leaving it. As a result, a product, be it education or housing or movie tickets or cars, would gain the most profit by including the largest number of consumers. Each person had a lot of value and thus a lot of buying power.
Fast forward 50 years and humans have simply lost a lot of value to the economy. With increases in efficiency and the rise of automation and cheap labor, people have less real value and less buying power. The real value in the economy now lies in equity. As the economy slowly becomes more and more self-sufficient (as in, people don’t need to do as much), value resides more and more in those who own the output of the system.
Today, sellers don’t earn the most profit by pricing to include as many people as possible. They get the most value by targeting the highest-value consumers. Colleges don’t need to be priced for everybody anymore because value is no longer spread across everybody. Colleges get the most bang for their buck by being priced extremely high, so that only the extremely wealthy can afford them. It’s not that they intentionally target the rich. It’s that they’re a highly scarce good that some people can and will pay exorbitant prices for, so the price naturally adjusts. Same with housing. Same with healthcare.
This has become my working model over the past year. And with automation and AI set to take off, this won’t get any better. In fact, this will get much, much worse.
An example of how a rich society has a seemingly-necessary expense that a poorer society doesn’t: In the American military, they load magazines to the full 30 rounds (which wears them down), then throw them away after each use. In the Israeli military, they load only 29, and reuse them. This seems like the sort of expense that could generalize to cause cost disease.
I’m fairly certain the American military doesn’t throw away rifle magazines after each use(!), though I could believe they’d accept a higher wear rate.
To my understanding, in a combat situation mags are treated as disposable. When your rifle mag is empty, you flick it out on the floor while reaching for a new one, rather than taking the time to stow it in a pocket first. Given that fast reloads can easily be a significant factor in survival, this is a cost effective strategy. If time allows, you can always collect up spent mags after the area is secure, but losing a couple after a firefight isn’t really a serious problem; the mags don’t cost all that much more than the ammo inside them, and actual combat is relatively rare. During range training, mags are obviously easily recoverable.
The usual rationale I hear for down-loading magazines is to increase reliability, not to control costs, and usually in the context of more esoteric weapons from the 40s to 70s. Designing a 30-round mag that actually, reliably can hold 30 rounds isn’t magic, and modern mags shouldn’t have any wear issues from routinely being fully loaded. I would imagine most mags are damaged or worn out by landing badly on concrete or metal flooring, resulting in damage to the feed lips or cracks/dents to the mag body.
[Warning – response only to article, have not read through other comments to see if my comments have been made redundant]
I think the author has picked out three different flawed markets:
(i) US Healthcare market
(ii) US Education market
(iii) US Housing market
They are all malfunctioning, but for different reasons:
(i) US Healthcare comes down to principal-agent issues, liability focus and the fact that – for political reasons – it’s more *healthcare provision* than *healthcare insurance*
(ii) (Higher) Education is now, and has always been, a prestige good – it’s a mistake to look at grades or to expect increases in money to equate to increases in position – universities have to spend in order to retain their prestige rank – and the people paying to attend, to retain their prestige rank in the labour market.
It’s not about the “cost of education” – it’s about the cost to put yourself ahead of X people – and that obviously depends on how much money those people have (many of whom – in the US – will be on scholarships for showing ‘great aptitude’ – so it’s not even *their money* you’re competing with).
(we *might* be looking at a splintering of the higher education market into MOOCs which handle prestigeless training – and special super expensive Campus Experiences which mostly mark you out as a prestige employee (either because you could afford to pay for Harvard or because Harvard has already singled you out as ‘being exceptional’).)
(iii) US housing market is suffering from the same problems in Europe – the ones most obvious in the UK. Namely, it’s *very hard* to prevent current local home owners accruing the political power/clout/legislated power to determine which way house prices go. Oddly enough, they tend to say “up”.
To put it another way, Home Ownership is one of *the most successfully unionised and recognised industries in the UK*.
It’s a union so powerful and all-encompassing that it doesn’t even require any of the usual trappings that mark a union – because all of those things are a means of creating a consistent position and multiplying political power. When you already have a consistent position and all the political power you could ask for … traditional union infrastructure is actually just a liability (a lightning rod for discontent).
Nothing wrong with unions per se … but, like any trade association, there needs to be a political counterforce and political oversight to stop it turning into a monopolistic organisation for optimising rent-seeking.
Re: Housing, one factor that drives up prices is also the presence of significant amounts of minorities in America, and a blanket ban on discrimination. You can’t put “no $minority allowed” on a housing advert, but you can put the price in a range that most minorities can’t pay.
Does anybody have insight into how much of a factor this is in the UK and France?
Uh… no. Our taxes were not low back in the ’50s–’70s. The top rate was 91% from WWII until Kennedy, then fell only to 70% until Reagan brought it down to under 30%. We’ve had budget issues ever since, and the weight of that on society has kept increasing. We’re trying to sustain 1960s-level benefits with 1920s-level taxes, and it’s not working.
Don’t underestimate how much of the cost increases you’re seeing are because much of the cost has shifted from the government to the consumer. When education was cheaper, it was because we were funding virtually all of the cost of public universities through state governments as an important part of making society as a whole better. Now, we have people arguing that if they aren’t personally getting educated right now, they shouldn’t have to pay taxes to support it. A large part of the cost of any individual hospital item is compensation for the ones who can’t pay but need to be helped anyway. For some hospitals in poor areas, this can be 50% of the cost. Transit is chronically underfunded, therefore service degrades, therefore it gets further underused, therefore underfunded…
This to me seems like the core of the issue, so to separate the flight of the rich from taxes and the higher burden on the rest of us seems like you’re begging the question.
You’re assuming that a higher tax rate automatically means higher taxes. It isn’t that straightforward. The kind of people with the kind of money that puts them in that bracket will just find hiring tax amelioration lawyers to be cheaper than actually paying the rate without protest. If you look at tax revenue as a percentage of GDP, it has hardly moved despite radical variations in the tax rates over time.
How much additional tax revenue you get out of each additional percentage point of tax varies, but I have not seen any convincing evidence that it’s currently zero or negative. Evidence suggesting that the USA is well below the optimal income tax rate is presented here: http://www.epi.org/publication/raising-income-taxes/. If you could direct me to data for income tax as a percentage of GDP going back before 1970, I’d like to see it. I can think of many other reasons that total tax income as a fraction of GDP did not fall far as the top marginal income tax rates decreased – most obviously because the tax base was simultaneously broadened.

I have seen how tax planning decisions for professionals and small businesspeople get made, and to me it often looks more like a combination of anti-tax rhetoric and spite (I’m paying 0.5% more on payroll? WELL I’M GONNA PAY MY TAX LAWYER TWICE THAT BECAUSE THE GOVERNMENT IS KILLING MY BUSINESS) than economic rationality. Marjorie Kornhauser has studied this for decades. This conversation has been infected for a long time: in Dickens’ Hard Times (1854) the factory owners threaten to kick their property into the Atlantic every time the government passes a new regulation. Of course, back then the regulations were things like a 12-hour maximum working day or a stop to child labour, rather than a 0.5% increase on payroll or income taxes.

(Note that none of this should be taken as support for OP’s comment, which does not seem to me to respond to SSC’s issue unless coupled with the Marxist critique present elsewhere in this thread.)
Tax revenue as percentage of GDP is essentially flat since WWII. It is false that we had lower taxes in the Fifties and Sixties (they cover about the same spread, averaging maybe a point or so higher), but it’s also false that we could afford everything back then because we were taxing the shit out of the 1%.
Scott, help me out here because I’ve read a long article about the mysterious nature of rising costs in certain sectors as well as hundreds of bemused comments, and the article had no more than a throwaway paragraph saying that maybe rising inequality is a sign that the ‘missing’ money is ending up in the pockets of the super-wealthy elite.
I come from a left-wing perspective, so I hope you can see that to me ex nihilo, “the super wealthy are becoming much richer than was historically the case, also all of these important services are becoming way more expensive than they used to be, but the one does not explain the other” looks like an extraordinary claim. I would like to see more evidence presented that this is not the case before updating in this direction!
In particular, I can see that a large majority of the odd features you have picked out about these services are acting exactly as predicted in Das Kapital volume 2, where Marx studies the process of realisation of invested capital (ie, money spent on labour, materials, tools etc) as the principal plus surplus value in money form. In particular, some of his predictions were:
1. Gains made by workers through collective action in sites of production can be taken away again by the landlord, the grocer, the financier etc.
2. The difficulty in the realisation of capital will incentivise businesses to strive for monopoly positions (whether by government mandate, mutual cooperations, quasi-monopolies such as real estate, branding and advertising).
3. The tensions between the production of surplus value and the realisation of surplus value will tend to set certain sectors of capital against one another – for example landlords would prefer if workers were well paid, but had to spend larger amounts of money on rent whereas factory owners would prefer to pay workers as little as possible, and that includes low housing costs.
Later analyses in the Marxist tradition have noticed that financial capital these days is doing very, very well compared to workers, but also compared to traditional industrialists. And four of your five examples are fields in which debt and financing play a very large role. It’s pretty easy to see that financial capital would be incentivised to make these things more expensive so that they can extract more money through larger loans and financing. (I’m not certain about subways. Are they typically debt-financed?)
Financial capital certainly has the economic and political power to push for this, and they don’t particularly care if they squeeze other holders of capital along the way. These are debt-financed fields in which large monopoly powers exist for one reason or another. And while I acknowledge that bureaucratic bloat is certainly playing its role, I’m baffled by the relative lack of consideration of normal capitalist tendencies on this thread. As far as I can see it is the single most important factor driving up the costs of these services. Please present me with evidence that I am wrong about this!
Excellent commentary. This to me may be a rebuke of centrism: you just can’t allow capital to have access to the public purse; they’re going to start robbing us to the degree that we allow it. The left at least offers the state as a dialectical force against private industry, that is, as more of a fight than as a capitulation. In our attempts to discredit the inherent conflicts under capital, the modern state has given in to a resignation that it doesn’t even intellectually accept. The inherent antagonism ought to be restored, and public-private compromises should be resolved into either the private sector or the public sector. And my God America, please regulate your financial institutions.
Given that directly-issued federal government loans account for 92% of the student loan market, it doesn’t seem likely that higher education costs are being driven by capital. Unless you’re accusing the US Treasury of being a usurious lender at that rapacious 4% interest rate. Artificially subsidized demand coupled with artificially restricted supply really explains everything in this case; it’s not a mystery how the money is being spent or where it’s ending up, either. Schools have hired more staff per student and are buying much larger and more expensive amenities. In some states, schools are also being forced to spend more per student than they historically did because state aid is decreasing (this is definitely the case in California; I’m sure it’s the case in other states as well).
You could possibly make a case that schools are financing unnecessary expansion on debt and that’s what’s forcing their costs up, but even if so, that’s an entirely voluntary choice on their part. I don’t think anyone can credibly argue that a university without a rock climbing gym and massage center is failing in its core mission and has no choice but to fall prey to rapacious lenders, and schools don’t need to borrow to account for reduced state aid if they’re simply raising tuition to match every year (which they are).
Moving on, the healthcare market is not primarily debt-financed, so it seems difficult to make a case for how capital is exerting undue influence here. Medical debt is considered a terrible investment in the financial industry because it is nearly impossible to collect on: it sells for less than a penny on the dollar. If capitalists are driving up healthcare bills to drive people into debt, they’re doing a pretty lousy job of it.
The housing market has a stronger case for capital involvement, since about 75% of the US residential mortgage market is privately controlled, but two things throw a wrench in your argument. First, while average housing prices are up, they have not risen in anything approaching a uniform manner. For example, the average price of a home in San Francisco is 785% of the 1980 baseline, while in New York it is 644%. Dallas — a city which has experienced a similar economic boom but has much less restrictive zoning and housing policy — is only up 255%.
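Those multiples are easier to compare as annualized growth rates. A quick back-of-the-envelope conversion (assuming the “1980 baseline” runs to roughly 2017, i.e. 37 years – an assumption on my part, since no end date is given):

```python
# Convert "price is X% of the 1980 baseline" into an implied compound
# annual growth rate over an assumed 37-year span (1980 to ~2017).
def annual_growth(multiple: float, years: int) -> float:
    return multiple ** (1 / years) - 1

for city, pct in [("San Francisco", 785), ("New York", 644), ("Dallas", 255)]:
    print(f"{city}: {annual_growth(pct / 100, 37):.1%} per year")
```

The point survives the conversion: San Francisco’s prices compounded at roughly twice Dallas’s annual rate, which is the kind of gap a uniform lending-cost story has trouble explaining.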
If lending capital were the driving force behind increased costs, you would expect to see much more uniform rises across the board as loans became more expensive and lenders spread costs (and risk) around as evenly as possible to maximize their returns. Instead, what we see are spikes in urban areas experiencing economic booms and high immigration (high demand for housing) and extreme spikes in urban areas with highly restrictive zoning and development laws (limited supply).
The restrictive cities are experiencing artificial distortions due to capitalist interests, but it’s not the lenders doing it — it’s primarily the existing homeowners. In addition to the usual “I don’t want my neighborhood to change” NIMBYism, there’s strong financial incentive for existing homeowners to resist any new supply of housing entering the market, because it drives their home value up considerably. As long as entrenched landowners can control city politics (and in most cities, they do so quite effectively), they can strangle any potential competition and extract massive rents from the artificially constrained supply. As Matt Rognlie pointed out, Thomas Piketty’s finding of capital’s increasing value in Capital in the Twenty-First Century is almost entirely due to the increased value of land, primarily land in urban areas.
So if you want to address the capital problems in the housing market, a good place to start is with measures that reduce the power of landowners to extract the undeserved gains from their land (by which I mean increases in its value that are not due to real improvements to the property). A land value tax is the coup de grace here; it immediately seizes the unearned profits and redistributes them. The problem is that implementation of such a tax will come along with tens of thousands of sob stories about people forced to sell homes they can no longer afford, because their quaint 50 year old home is sitting on land that’s worth millions of dollars due to its location, and they can’t afford the taxes. While this is a good thing for the economy and society at large, even the most aggressive redistributionists usually start to blanch when the necessary outcome is kicking little old widows out of the home they’ve lived in for 50 years. So it’s not a political solution that’s likely to happen anytime soon.
Thanks for your reply. I will consider it more fully later but I do want to point out before I do that the private sector/public sector division is not the same as a capital/Not capital division – just because it’s the state doing it, doesn’t mean it isn’t ultimately capital and responding to the incentives that capital responds to. In the case of publicly financed student loans for example, the state still has an incentive to maximise the amount that students borrow. (I don’t mean to suggest this is the only incentive)
Apparently I was wrong, I was more curious than tired! So here I am back again sooner than expected.
Regarding the education sector, I think I broadly agree with you, the state as a capitalist actor notwithstanding. (It’s news to me that it IS understood where the money is going – from the article and discussion I got the impression that this was not well understood.) I don’t understand why decreasing state aid would lead to higher spending? Where is the money coming from?
In regards to the healthcare market, though I agree with your point on medical debt in particular, debt is not the only form of finance. With the medical insurance that the majority of people have, financial institutions act as an intermediary between the buyers and sellers (patients and hospitals), and take a cut of the transactions. They still would expect in general to earn more of a cut when the total amount of transactions is higher, wouldn’t they?
I don’t understand your point about the distribution of price increases in real estate. In what way is this evidence for the non-involvement of capital? I would have thought that since real estate lenders are limited in their capital, they would want to put most of it into the places earning the highest returns, and would have the same incentives as anyone else to be investing in boom regions. Real estate, as you point out, has the unique distinction of the owners of capital being very broadly distributed, leading to its unique characteristics such as NIMBYism.
In general I guess what is most evident to me is that financial capital, in all three of these areas, stands to win when costs increase (the customers always stand to lose; the providers face increases both to their revenue and to their costs). What I haven’t shown is some kind of clear proof that financial capital has influenced matters to raise costs, but I hope it’s not contentious that it has both the incentives and the means to do so. Given my priors, I will tend to predict that that is the case until presented with evidence against it.
One final note as it occurs to me, Marxist analysis often comes up against situations where the actions of individual capitalists act counter to the interests of capital as a whole. So for example, in the medical insurance case, a firm is incentivised to increase the total costs of medical care for patients they insure. In aggregate, after competition between firms, this may lead to (and has led to) a lower cut of the total spent amount. This of course leads to greater risk, and may culminate in a crisis such as 2008. Nonetheless, the incentives do seem to me to point in that direction. Some modern Marxist analysis holds that such crises are the rule, rather than the exception – or in other words that they are a part of the normal functioning of capitalism rather than some kind of exception to it, and I’m rather sympathetic towards that view.
Given that directly-issued federal government loans account for 92% of the student loan market, it doesn’t seem likely that higher education costs are being driven by capital
It’s still a cost when the government pays for it, and when you subsidize something you get more of it.
This irks me personally because I worked 40 hours a week to pay my way through several years of college after the byzantine FAFSA process determined that my mother’s fatal cancer was some sort of Medicaid windfall, meaning I got to enjoy subsidy-inflated prices without a subsidy.
And yes, it is ridiculous that teenagers can commit to six-figure nondischargeable debt for a product of often dubious value, and as I said below if any other industry was doing so there would be talk of jail time for the execs.
Which super-wealthy elite? Be specific, including the mechanisms by which they have driven up the specific costs under discussion.
Scott’s analysis focused on cost growth in education and health care – two markets where most of the spending is done by or heavily subsidized by the government, where many of the providers are run by the government (public schools and hospitals), and where the private-sector providers are mostly heavily regulated publicly-traded corporations whose books are pretty much open and which don’t seem to be making 80-90% net profit. And as Cypren notes, these things aren’t being paid for by private debt.
Meanwhile, when I look at lists of the richest people or biggest corporations in the world, I find almost exclusively fortunes being made by selling cheap consumer goods or base commodities – which have not seen massive price increases and in many cases have declined in inflation-adjusted price. You can perhaps denounce these capitalists as Rapacious Evil Plutocrats for their exploitation of sweatshop labor overseas, but not for cost inflation in United States or Western Europe.
Rapacious Evil Plutocrats turn out to be really good at providing people with lots of nifty stuff on the cheap. Including those educational or health-care niches where the government is mostly on the sidelines, e.g. cosmetic surgery. When we say, “This market is too important to be left to the Rapacious Evil Plutocrats and so the State must take charge!”, then we see prices increase by a factor of five or ten. This I do not believe is accurately predicted by Marxist theory. But it is so.
All excellent points! The best allocations of resources always seem to be the ones we arrive at voluntarily.
The starving masses amidst a small cadre of comfortable elites seem to be found in precisely the places where coercive fairness of the distribution is emphasized over voluntary efficiency — Cuba and North Korea.
Starving masses in Cuba? There is plenty of reasonable dispute about Cuba’s economic prowess, but that is nuts. North Korea, yes.
North Korea, yes, fifteen years ago. Though much of the difference between then and now seems to be a Hunger-Games style tolerance of black markets.
Yes, Cuba’s poor really are living at or near starvation levels.
http://www.nationalreview.com/article/426334/look-how-cubas-working-class-lives-scott-beyer
“Those living in the slums suffered Third World conditions, deprived of food, soap, and toilet paper. Farmers there hauled equipment by horse carriage, large families lived in small shacks, kids played soccer barefoot on pavement, and — as I would discover firsthand — mothers offered their daughters to passing gringos.”
And the firsthand account above is not historically unusual, here’s an article from 20 years ago. Maybe you’re not technically starving if you just ate the neighbor’s cat, but most people would characterize it as such.
http://www.independent.co.uk/news/world/millions-of-cubans-facing-starvation-hunger-is-fuelling-an-exodus-of-desperate-refugees-writes-phil-1417691.html
“Under ever-tighter rationing since the collapse of the Soviet bloc, the poorest of Cubans began devouring the cat population last year. Even the tiny allowance of meat in their ration books is rarely available after they queue for hours at state warehouses.
‘Cats were among the first to go since they’re said to taste okay. I had my three robbed from my house in January,’ said Sylvia, a 32-year- old unmarried mother who lives in the Playa district and works as an administrator in a state hospital.
‘I’ve heard of people eating dogs, those little ones that have no fur, but I think that’s the exception. They say dog meat tastes bad and you still see plenty of stray dogs. Most people have drawn the line there. So far.’
…
The hunger of Cuba’s 11 million people has reached a critical level. Package tourists from Canada, South America or Europe are kept well away from it as they are whisked from airports to hotels in beach resorts such as Varadero. But Cubans without access to US dollars are starving as badly as their Caribbean neighbours in Haiti, the western hemisphere’s poorest nation.
…
They are allowed half a dozen eggs each a month. The only meat is part of a soya and mincemeat mixture but each is allowed only half a pound a month. Their other monthly rations are: rice – five pounds per person; black beans (a Cuban staple) – one pound per person; sugar – five pounds; salt – one ounce; rum – half a bottle. No soap or toothpaste has been available this year, and aspirin is rationed to 10 tablets every three months.”
Well I’m skeptical. The problem is one could find narratives like this whether or not it was true, because there are plenty of folks willing to lie (or greatly stretch the truth) to make a partisan point. Much like one sometimes hears about poor Americans starving by radical leftists. But I haven’t heard this narrative before, so I will keep an open mind that it might be true.
I don’t know if that story is true or not, but a couple of the administrators from my university visited Cuba some years back and I heard their description. They were Jesuits, the school is soft left in attitude, they were on the whole inclined to be favorable to Cuba and said some positive things about it.
But they described it as very poor, and I’m pretty sure they said that their cab driver was a physician moonlighting because he needed the money.
I don’t think they got into the countryside, but I could be wrong. What I remember was talking about conditions in Havana.
Well, so far we’ve had socialists and anarcho-capitalists tell us that this proves their theories. Any way we can get a literal Nazi to show up too? An adherent of the Juche Idea?
I’m assuming you’re referring to me, I couldn’t find any leftier posts than mine; I just want to say hey man, I’m just a liberal.
I dunno about Nazis, but if you want someone on the extreme corner of the “Who Bitch This Is” and “There Are Five Lights”, the Death Eaters don’t find these findings surprising. The theories this is evidence for are “nobody in charge” and “imperial decline”, mostly.
(Disclaimer: /pol/ aren’t literal Nazis; they just believe the Holocaust was a lie, but it should have happened anyway.)
Judging from my time lurking on /pol/, I think they’d say that it’s mostly increased demand, with housing prices being driven by immigration and degradation of the family unit due to a marxist/jewish plot. Education is the same but with lowering of standards. Healthcare prices are due to processed foods and HFCS (the liquid Jew). Also combine all of these with parasitism from Jews in executive positions selling us Globalism by artificially adjusting inflation numbers to make us look richer than we actually are and cheap mexican/overseas labour for destroying the manufacturing industry.
They would probably conveniently ignore the construction costs, or handwave them away by saying Trump will fix it anyway.
For the record, I think economics, not ancapism, has a good explanation for why an entity that restricted supply and subsidized demand would be a likely suspect for contributing to prices rising over time.
Healthcare spending is a little different, because nearly all healthcare spending is marginal, with little effect on life expectancy outside of vaccinations, and much is palliative or preventive. One of the great healthcare myths is that more preventive/diagnostic spending would save money in the long run, but that requires a positive identification at a much higher rate than usually obtains, or than most people would choose to pay for if they weren’t insulated from cost decisions. Do you really need that MRI? Well, for better or worse, you’re maybe 3x more likely to get one here in the USA than in the rest of the OECD.
Higher education, a relic of a time when you had to travel to where the books were, has similar problems, but the cost insulation is magnified by investor insulation – giving the seller access to financing with a federal government guarantee of payment is a recipe for a disastrous price spiral, and if any other industry were saddling teenagers with non-dischargeable six-figure debt for products of often questionable value, there would be talk of jail time for the execs. The marginal benefit of a climbing wall or a massive pensioned bureaucracy to the process of learning the information in your degree program is dubious when sufficiently motivated students could generally learn the material on their own anyway (most students do not actually spend much time asking questions of their professors), and arguments for value in socialization or signalling to employers are unpersuasive given the enormous costs.
Rising income inequality is the great economic shibboleth of our time, but all it really says is that resources are being allocated by a smaller proportion of society, and market-driven gains suggest the new allocation is more efficient and makes everyone better off (i.e., outside of some obvious cases of rent-seeking, consumers are rewarding better value propositions). If you look at basic human needs, basic consumption inequality is generally dropping (partly because producers achieve those gains by expanding markets as broadly as possible), and increasingly the remaining consumption differences are in signalling rather than welfare (e.g. a nicer car, brand names). Would you rather live in the country where everyone is equally malnourished or the country where everyone is well-fed but only a few have Ferraris?
Small correction: $2.2B/km is the projected cost for the next phase of the Second Avenue Subway, if it ever gets funded/built. The section that just opened this year cost $1.7B/km.
Somewhere the ghost of Karl Marx is laughing. I think you’ve maybe captured the underlying reason that quantity is turning into quality ala Trump, the negation of the middle class (the negation of the negation?). I don’t really see there being a right wing solution to this: as more and more contradictions arise and the population bifurcates, more and more social issues will have to be state-ified.
Education, for instance. As we barrel towards an economy that exports airplanes and imports shirts, more and more people are going to have to be educated to maintain such an economy. However, lower tradesmen, people in manufacturing, truck drivers, etc. are all depreciating into the lower-proletariat, and will thus no longer be able to afford the necessary education by which to pull themselves out of this condition. But social spending on education can resolve this to some extent.
The same for healthcare. More specialization and knowledge in healthcare means more healthcare providers trained in esoteric subjects, more esoteric machines, etc. Someone of the working poor isn’t going to be able to see a spleen-oncologist.
The climate is crying out for coordinated action that just empirically isn’t taking place sans state intervention.
We’re going to have to be clever about redistribution, much more so than we have been: we can’t just brute-force tax without the accompanying capital flight, brain drain, etc. I suspect we’re going to need a rise in labour unions for unskilled workers, closing loopholes, cracking down on wage theft, more financial regulation, attempting to limit excessive credit, maybe even some socialization of investment.
I think left-liberals can still recommend a good degree of deregulation, especially if it’s needed. So if the ‘too much regulation’ thing is getting in the way, a liberal party can nonetheless pursue left-liberal interests while still tearing down some of the regulatory burden. Now, as a left-liberal I don’t particularly think this will do much… Well, other than cut down on credentialism, which I will grant is probably a legitimate issue. Anyway, I think centre-left social democracy might be the best thing we can do, even if that ‘solution’ isn’t exactly totalizing. Some of these issues, well, we might just have to live with. Sorry if that’s unsatisfying. I at least hope to hold up the centre-left tradition that seems to be less and less represented on your blog…
Late edit: maybe one reason productivity growth is slowing is zombie firms; that is, firms whose profit rate merely pays off the interest on their loans. Second, workers who are barely able to pay for their mortgage, student loans, and various lines of credit. In some sense people are being sucked dry, leading to higher rates of default, bankruptcy, etc. It can be a brutal, destructive squeeze.
These zombie firms may have to die, and the size of loans may have to become smaller again (Marxism would suggest that this might be impossible due to the rising organic composition of capital). My suspicion is that gains would be made if we moved towards a high-wage economy with heavily regulated credit industries, so people are less prone to overextend themselves, and so that lending becomes less predatory. Again, requiring left-liberalism over a right-wing alternative.
I forgot to mention that in The Wealth of Nations, Adam Smith thought that the cost of rent would increase as the cost of consumables went down; i.e., a kind of subsistence wage would be enforced by market laws on the lower classes regardless of whether productivity rose. Maybe consumables cost, say, 60% of a monthly income 50 years ago but only 30% now (too lazy to look up the actual costs, these are guesses, my apologies). Rent can then increase in proportion. This makes capitalism infinitely depressing, but I think there’s a good chance that he was correct on this.
Edit: this is assuming the lower classes aren’t spending anything (or much) on health or education; these generally have to be cut from lower-class budgets.
Could you specify where in The Wealth of Nations the passage you describe is? Smith is quite often misinterpreted by people for various reasons, so I would like to check on whether your description is correct.
If I had to guess, it’s something coming out of the iron law of wages, and the rent in question is not the rent people pay for their houses, it’s the rent on the agricultural land that feeds people.
The basic argument there, which appears in variants in Malthus and (most sophisticated) Ricardo, is that if income is above the level required to maintain the laboring population, population increases and that drives wages back down. Ricardo makes it explicit that the relevant wage is not subsistence in any physical sense but the wage at which the masses choose to produce enough children to just replace themselves, which depends on tastes as well as the price of inputs.
It’s a clever argument but it doesn’t seem to fit what has happened since it was made.
“All those improvements in the productive powers of labor, which tend directly to reduce the real price of manufactures, tend indirectly to raise the real rent of land. The landlord exchanges that part of his rude produce, which is over and above his own consumption, or what comes to the same thing, the price of that part of it, for manufactured produce. Whatever reduces the real price of the latter, raises that of the former. An equal quantity of the former becomes thereby equivalent to a greater quantity of the latter; and the landlord is enabled to purchase a greater quantity of the conveniences, ornaments, or luxuries, which he has occasion for.” Adam Smith, Vol. I, pg. 229ish
“In adjusting the terms of the lease, the landlord endeavors to leave him no greater share of the produce than what is sufficient to keep up the stock from which he furnishes the seed, pays the labor, and purchases and maintains the cattle and other instruments of husbandry, together with the ordinary profits of farming stock in the neighborhood. This is evidently the smallest share with which the tenant can content himself without being a loser, and the landlord seldom means to leave him any more.” Smith, Vol I., pg. 130ish.
Both quotes are lifted from Marx’s 1844 Manuscripts, citations may therefore be off in modern editions.
Yes, Smith does seem to be largely commenting on agricultural land. Didn’t realize that. But, I still think the argument can be transferred: the price of rent is resolved by both competition between workers and the conflict between tenants and landlords. As workers have more disposable income, so they can spend more on rent, and it seems like they do.
—-
In response to Ricardo (whom I haven’t read), productivity has been outpacing population growth, therefore wages can’t be entirely pressed down.
Ricardo has two potential defenses. The first is that he is describing long term equilibrium:
“Notwithstanding the tendency of wages to conform to their natural rate, their market rate may, in an improving society, for an indefinite period, be constantly above it;”
The second is that his natural wage depends on the tastes of the workers. If they are becoming more luxurious, the natural wage will rise, since workers will require a higher wage before they are willing to produce enough children to more than replace themselves:
“The friends of humanity cannot but wish that in all countries the labouring classes should have a taste for comforts and enjoyments, ”
The person whose prediction most clearly fails to fit the subsequent data is Malthus. His basic argument was that the income of the masses could never be much above what it then was for long, since if it was population would increase and drive wages back down. Since he wrote, the income of the masses of England continued to rise for about two hundred years.
Thanks for the Smith quote. I think the first makes implicit use of the iron law. The landlord’s produce will support X workers, so if X workers can produce more than they used to because of technological progress, the landlord’s produce will buy more stuff.
Smith knew a lot and wrote well, but Ricardo was a much better economic theorist.
The ridiculousness of the notion that climate and its consequences can be forecasted 50 to 100 years out with such accuracy and reliability to justify spending trillions now is rivalled only by the insanity of supposing non-OECD nations would ever take such action under any threat short of military occupation.
These predictions are the intellectual heirs of Ehrlich’s forecasts of mass starvation and the claims that oil and many other natural resources would be nearly exhausted by the year 2000, and emissions controls are the better-mannered descendants of population controls and other techniques of mass coercion dreamt up on the backs of bad modelling.
Well, even if we grant that the science is crude, and that long-term scientific forecasts can indeed be inaccurate or wrong, I still wouldn’t grant that therefore we ought to say all bets are off. Ok, maybe some prediction about sea-ice is wrong, but the state of the Earth does seem to be trending in the direction vaguely outlined by the sum of predictions.
According to the right, we should just cancel any pursuit of moving towards greener energy. I don’t think this is tenable; even if there’s a chance the right is correct, is it worth the risk? We have but one Earth. I’m not rolling the dice for some minor uptick in productivity or GDP. Carbon taxes and incentivizing green R&D are a major undertaking no doubt, but a necessary one, and we have macroeconomic tools that we can adjust when green politics threatens a hit to our economy (i.e. carbon taxes with tax rebates — simply introducing incentives).
Cite, please? In particular, for the right’s desire to see the very substantial private-sector pursuit of greener energy be canceled, or for cancellation of any pursuit of nuclear energy.
Your claim is greatly overstated, and diminishes your credibility.
https://www.theguardian.com/us-news/2016/nov/22/trump-british-allies-against-windfarms-golf-courses-scotland
‘When I look out of my window and I see these windmills, it offends me. Nigel, Arron, Andy, you have got to do something about these windmills’
If you think there is some meaningful part of the ‘right’ that is unaffiliated with Trump, then ‘citation needed’.
Let me adjust the sentence.
According to some people on the right, we should just cancel any government-led or coordinated pursuits of moving towards green energy.
For the most part I find the right pretty indifferent towards private green industry; they don’t seem to care either way. If you look at the post I responded to, one can hardly say he’s arguing that this is a real issue that the private sector will resolve.
And I’m not sure I can grab a specific intellectual who formulated this, rather it’s a pop-argument that floats around in social spaces.
@1soru1 – Isn’t wind pretty much the worst of the renewables? Noise pollution, stroboscopic effects to anyone in the shadow, tons of bird strikes, unreliable generation pattern, and so on?
Meanwhile, more nuclear energy when?
> Meanwhile, more nuclear energy when?
Dunno, you are the one’s in government.
1soru1 – “Dunno, you are the one’s in government.”
Would you agree that opposition to nuclear power has been predominantly a left-wing position, historically speaking?
The right-wing position on this tends to be fairly consistent: green energy is fine, but it needs to sink or swim on its own merits, not with lavish government subsidies.
Note that the corresponding lavish government subsidies for fossil fuels tend to be a sore point for the right-wing and a significant split between the base and the politicians. But I don’t think I’ve ever heard an argument from someone I would consider ideologically right-wing (as opposed to merely a partisan Republican whose only concern is political victory, and who wants lobbyist contributions), for why fossil fuel subsidies should exist.
As far as I can tell, we are talking about “the right” and “the left” because CecilTheLion brought up “the right” in implicitly characterizing talldave2’s position. Regardless of who supports nuclear energy or whether some Republicans are only ideological and others only partisan, does anyone disagree that talldave2’s position is one associated more with “the right” broadly speaking, both ideologically and practically?
@Cypren
Hasn’t “on its own merits” been mobilized against attempts to approximately internalize externalities in emissions-producing industries? That would blur the line between indifference and opposition.
@Jack – “does anyone disagree that talldave2’s position is one associated more with “the right” broadly speaking, both ideologically and practically?”
I don’t see why anyone would, no.
“Hasn’t “on its own merits” been mobilized against attempts to approximately internalize externalities in emissions-producing industries? That would blur the line between indifference and opposition.”
This is the rub. If we can’t agree on what the actual cost of a tech is, we can’t make meaningful comparisons. On the other hand, the status quo is what actually exists, and minus any subsidies that I personally would have zero problem removing, oil costs what it actually costs, while the proper price of CO2 externalities is a hotly-debated and politically fraught topic.
Nuclear is relevant because it’s a solution we’ve had for decades, one that we pretty clearly could have gotten consensus on and actually implemented, one that seems like it would be a pretty decent improvement over the status quo, and one that the side most insistent on solving the problem seems to have no interest in pursuing whatsoever.
Right, but you have to be careful about even the assumption that CO2 is harmful in the long run. For instance, there’s considerable evidence that the Earth currently has just barely enough warmth and CO2 to sustain human civilization. Colder periods not only cause crop failures directly by temp, they also increase aridity and lower CO2 (possibly below the levels necessary to support mass agriculture of many staple food items).
1soru1/Cypren — “If you think there is some meaningful part of the ‘right’ that is unaffiliated with Trump, ” National Review published an entire issue titled “Against Trump.” I’m more of a libertarian, but don’t even ask what Reason thinks of him. Anyways, investment in alt-energy should be primarily market-driven, as subsidies generally just increase costs and waste (e.g. Solyndra/BeaconPower/ThisListCouldFillUpThePage) but realistically that is not going to happen even under Trump and a GOP Congress — check back next year and we’ll still be shoveling many billions of dollars in that direction.
“Right, but you have to be careful about even the assumption that CO2 is harmful in the long run. For instance, there’s considerable evidence that the Earth currently has just barely enough warmth and CO2 to sustain human civilization. Colder periods not only cause crop failures directly by temp, they also increase aridity and lower CO2 (possibly below the levels necessary to support mass agriculture of many staple food items).”
Mind if I ask for a source on that?
I think you gave short shrift to libertarian explanations of this phenomenon. In particular, the Kling theory of public choice may explain a significant fraction of cost disease: public policy will always choose to subsidize demand and restrict supply. If you restrict supply holding everything else equal, prices will go up. If you subsidize demand holding everything else equal, prices will go up. If you do both, prices will really go up.
(1) Healthcare: The government restricts the supply of all healthcare professionals (for example, doctors, nurses, CNAs, pharmacists, dentists, LPNs, etc.) via occupational licensing. (I should note that maybe everyone can get behind the simple idea that the number of doctors per 10,000 people in the US should at least remain constant over time and not go down, as it has.) It restricts the supply of healthcare organizations (for example, hospitals, surgery centers, etc.) via onerous regulations, like the very ridiculous “certificate of need”. You have already explained in previous posts how things like the FDA restrict the supply of generic drugs. In terms of demand, the government subsidizes health insurance via the corporate income tax code, CHIP, the Obamacare marketplace, Medicaid, Medicare, etc.
(2) Education: I have done less investigation into this sector’s regulations. You mentioned Title IX. David Friedman has some nice blog posts on how the American Bar Association’s regulations on law schools make cheap law schools impossible. (This same concept also applies to healthcare-related professional schools, by the way.) If Bryan Caplan is right about signaling, a lot of education involves negative externalities, so it should be taxed or limited by the government. Instead, it subsidizes demand via loans. K-12 education, meanwhile, receives massive subsidies from the government; everyone can enjoy a totally free K-12 education.
(3) Real estate: Land-use regulations restrict the supply of housing. (Explanations of this can be found by googling “Matt Yglesias housing”.) It also subsidizes housing via Section 8, various other HUD programs, Freddie Mac, the mortgage-interest tax deduction, etc.
In short, any industry that the United States government has a heavy hand in has/will experience cost disease.
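The subsidize-demand/restrict-supply mechanism is easy to see in a toy linear market. This is a minimal sketch, not Kling's own model; all parameter names and numbers are illustrative assumptions:

```python
def equilibrium_price(demand_intercept=100.0, demand_slope=1.0,
                      supply_intercept=20.0, supply_slope=1.0,
                      subsidy=0.0, supply_shift=0.0):
    """Toy linear market. Consumers act on the subsidized price:
        Q_demanded = demand_intercept - demand_slope * (P - subsidy)
    Regulation shows up as a leftward shift of supply:
        Q_supplied = supply_intercept - supply_shift + supply_slope * P
    Setting the two equal and solving gives the market-clearing price P."""
    return (demand_intercept - supply_intercept + supply_shift
            + demand_slope * subsidy) / (demand_slope + supply_slope)

print(equilibrium_price())                              # baseline: 40.0
print(equilibrium_price(subsidy=10))                    # subsidize demand: 45.0
print(equilibrium_price(supply_shift=10))               # restrict supply: 45.0
print(equilibrium_price(subsidy=10, supply_shift=10))   # do both: 50.0
```

Either intervention alone raises the price; together their effects simply add, which is the "prices will really go up" case.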
Yeah, I think I’m an AnCap hammer that sees government nails everywhere, but the pattern for health care, education, and real estate seems to be heavily regulated supply and subsidized demand, with the resultant price increases you’d expect. Not to say other sectors aren’t regulated or subsidized too, but these three sectors seem to be particular heavy focuses of government intervention.
Is it really better to compare industry salaries vs average salary? Why not salary vs baseline inflation?
In other words, part of the US economy can deliver goods at steadily declining prices but not many jobs and another part of the US economy can deliver lots of jobs and costs are going through the proverbial roof.
I can’t help but wonder if part of the ever-expanding expenses isn’t that we mediate our interactions through the legal system more than we used to.
What got me thinking about it was what I’m working on this week. To answer Incurian’s question above about why I was posting during the workday, I was avoiding working on a report for selecting a contractor for a project I’m working on. (I owe the taxpayer about three hours this weekend, since I spent time on Friday here and reading about the Oroville Dam spillway.)
We had contractors submit proposals, and we had two structural engineers and a construction quality assurance rep sit in a room for three days, writing our individual reports about each proposal, then coming to agreement about how we rate them. Then I have to write a report summarizing all of our individual reports, which gets fed into the arcane machine that will eventually spit out an award. This process costs about $10,000, and had zero value for evaluating the proposals. However, it has to be done this way or we’ll get dragged around in court by an offeror’s lawyer if they choose to make a case of their rejection. I think that in years past they’d use a simple low-bid process, which has its own problems, or the rejected contractors would bitch to their Congressmen or something but wouldn’t literally make a federal case of it.
A sea change in how we see teachers has taken place as well: the principal and teachers used to be God in their classroom. A number of years back with some relatives at the house we got to talking about schools (my mother, aunt, and uncle were schoolteachers). The question came up about what would happen if somebody called a teacher a “motherfucker” years ago. My dad (who went to the same school I did forty years prior to me) said that the principal would have thrown the kid down a flight of stairs; he wasn’t being hyperbolic. He then told us about how Principal Brown basically threw a kid against a wall in front of an all-student assembly for giving him lip.
An author who went to school in the next town over in the early 1900s (Cully Gage) told a story of his principal who beat a classmate with an ironwood cane for some misbehavior. The kid’s father went to the principal’s house (these are really small towns, even today) to talk to the principal after school and ask him not to physically discipline his son; if there was a problem, to let the father know and he would take care of it. The principal refused, as he saw the discipline of the school as his domain. The father then laid the principal out with one punch right on his front porch.
This would all end up with people in court these days–and that’s on balance a good thing! But now when teachers want to discipline somebody (or get disciplined themselves) the measures are opaque, time-consuming, and (relevant to the OP) expensive. Even if they don’t hire a lawyer, the paperwork needs to be done to satisfy one in case you get called into court.
Might I suggest rebranding your proposal a little?
Just about every time I got something from Walmart in the past few years that was not something so basic that it would be hard to mess up I ended up regretting the purchase when it broke three weeks later.
I get what you are trying to say, but perhaps you could compare the establishment to a fancy clothing store like [name] and say we need a JC Penney of medical care.
Just a thought.
Comparison with other countries got me thinking about “high standard of living”. What exactly does a middle-class Spaniard not get that a middle-class American does?
After you provide first world class stuff like indoor plumbing and drinking water, abundance of food, 24/7 electricity, relatively cheap long- and short-range transportation, basic literacy and numeracy education, basic modern healthcare, basic policing, etc, what exactly more could one want out of necessities? It seems to me like it’s mostly entertainment/luxuries and marginal improvements on the necessities.
From what I’ve looked at, there actually was a pretty decent gap in middle class accouterments until the mid 80’s to early 90’s where you could make the argument that the average middle class American was better off than the middle class Spaniard or Italian even including the various entitlements, but since then, it has really evened out where the only difference is that American houses are bigger, but more and more, I think that’s a combination of cultural differences and the available housing stock.
From what I’ve seen in Google maps, in Europe, even newer subdivisions being built still have smaller houses than American ones, despite these subdivisions being built in open areas.
Oh my gosh, there are a lot of confounding factors here.
First off, how much is cost disease America’s peculiar institution, and how much of it has happened across the developed world? Surely we could learn from the nations where it’s absent, or where it happens least if none such exist. (OTOH, if it’s only absent in particularly alien cultures like Japan, our ability to change is likely to be limited; and if more homogeneous cultures have the big advantage, we need to double down on assimilation, and that assumes HBD is pseudo-science.)
Is women’s workforce participation a causal factor? The increases are correlated, but that doesn’t necessarily mean anything, especially if it’s an American disease.
We’ve become considerably wealthier in terms of stuff even as family income has stagnated or worse since 1973. What’s influenced by cost disease is skilled labor (teaching, doctors, subway construction), without that going to the skilled workers. So let’s figure out where that money is disappearing to.
My conservative intuition is that a lot of this is going to women-dominated fields that have ballooned in importance for legal reasons (psychology, compliance with PC codes that didn’t exist in 1971, etc.) and family income is stagnant because these women are in the two-income trap, but who knows?
I also wanted to comment on the famous “increasing inequality since 1973” factor, but didn’t have data handy.
“The best survey data show that the top 3 percent … hold over half of all wealth. Other research suggests that most of that is held by an even smaller percentage…”
Can anyone direct us to good sociological data on this class?
If you’re trying to measure inequality, wealth is a terrible way to look at it because of how affected by age it is. In a perfectly equal society where everyone makes the same amount of money, saves the same amount, and gets the same raise every year, the richest 1/5 will end up with 66% of the wealth just because of cohort effects and compound interest. Income is better, and consumption is better still.
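The cohort effect can be checked with a toy simulation. The exact share depends heavily on the assumed savings rate, raise, and return; the illustrative parameters below (my own assumptions) give the oldest fifth roughly half of all wealth, in the same ballpark as the claim above:

```python
def top_fifth_wealth_share(working_years=40, savings_rate=0.15,
                           annual_raise=0.02, interest=0.05):
    """Perfectly 'equal' society: every worker earns the same wage path,
    saves the same fraction, and earns the same return. Wealth differences
    come purely from how long each cohort has been saving and compounding."""
    wealth = []
    for years_worked in range(1, working_years + 1):
        w = 0.0
        for t in range(years_worked):
            # each year: last year's balance earns interest, this year's wage is saved
            w = w * (1 + interest) + savings_rate * (1 + annual_raise) ** t
        wealth.append(w)
    wealth.sort()
    top = sum(wealth[-(working_years // 5):])  # richest fifth = oldest fifth
    return top / sum(wealth)

print(top_fifth_wealth_share())  # roughly 0.48 with these assumptions
```

Even with zero interest and zero raises the oldest fifth holds well over a fifth of the wealth, purely because they have been saving longer.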
This is an awesome post. I think the three things you mention above that make the most intuitive sense to me are:
>This kind of suggests a picture where colleges expect people will pay whatever price they set, so they set a very high price and then use the money for cool things and increasing their own prestige.
>…might we have changed our level of risk tolerance?
>…might things cost more for the people who pay because so many people don’t pay?
What about demographic changes that increase risk aversion? We shifted from dependents being young and in school to dependents on Social Security. IOW, who were the people not in the work force 80 years ago vs. who is not in the work force now? Here’s a link that shows the transition:
http://www.calculatedriskblog.com/2015/08/us-population-distribution-by-age-1900.html
Perhaps a greatly increased population of older, more risk-averse folks tilts the scales enough to make a big difference? For education, I think it’s primarily “signaling inflation”. The ease with which you can finance something and spread it out over a long time period aggravates this (like student loans). We are the insured society. I tried to find some statistics for how much we spend on insurance in a general sense but couldn’t find it. But I bet it’s gone up a lot. Not just health insurance.
Maybe it was an issue of anchoring, creating a social-want, and then slowly increasing prices as demand became more inelastic? Maybe it was just brilliant; capital has essentially and quietly placed the burden and cost of training on consumers, which makes me sadface.
Here is a theory/line of reasoning that may explain these observations:
What if the problem stems from a defect in the way our culture approaches progress? Namely, we are always trying to improve what we have, but we have a hard time walking back upgrades that cost more than they are worth.
I have only personal and anecdotal evidence, but hear me out:
Suppose you are in charge of police equipment.
Scenario 1: You have the chance to upgrade their radios to a more expensive model with twice the range.
Scenario 2: You have the chance to replace the radios they have now with a model costing 75% as much, but with a shorter range.
YMMV, but as someone steeped in American culture I feel like scenario 2 would be a much harder sell unless the police department was under some serious pressure to reduce costs.
I suspect that what subway construction, education, and medical markets have in common is that they are all more susceptible to this “quality accretion” than other markets.
The real kicker is that I think these industries are susceptible to this effect in different ways.
Elementary Education and Insured Medicine have third party payment systems that disincentivize economizing.
Higher Education and Medicine are both industries where the quality of the primary product (marketable skills/not dying) is difficult to measure so they improve the quality of secondary products (luxurious living space, etc.) in order to signal “we can be trusted”.
Police and Military are industries where lack of equipment can end up killing someone, so any attempt to roll back equipment is portrayed as a betrayal.
Police and Medicine both lack clear upper caps on what constitutes “getting the job done.” (At least until we live in a world where no crime happens and nobody is sick or hurt.)
If this is really a cultural problem then no public policy can fix it directly. In the end people will need to change how they think, and be willing to accept something less than the “best” for economic reasons. And that is a very hard sell.
Would you be willing to tell a parent that medicine A has a 99.9% chance of success, medicine B has a 95% chance of success, but you should take medicine B because it costs less?
I don’t have a recommendation, except a partial solution in higher education.
Why not allow student loans to take the following form: if you borrow $X to get a Y degree at Z university, then you owe the bank, say, 137% of your annual income, whatever that may be (no interest: you pay 10% of your income for 13.7 years and then you are done). The formula for the final percentage owed should be transparent, etc.
Someone getting a degree that allowed a high-paying job could get a loan with a low percentage compared to someone with a degree that is not marketable; an economical college may get a better rate than an expensive one; a college with good job placement may get a better rate than a college that doesn’t care if students are unemployed for two years after graduating; etc.
My hope is that if we can communicate “this is how much the degree is going to cost you” in a way that factors in prestige and the value of a degree to prospective college students, we can help some of them choose a more economical option. This will in turn incentivize higher education to cut costs in ways that don’t hurt the marketability of their graduates.
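The pricing rule being proposed can be sketched as a quick calculation. The 20% lender margin, the function name, and the example figures below are my own illustrative assumptions, not part of the proposal:

```python
def income_share_quote(principal, expected_income,
                       lender_margin=0.20, income_fraction=0.10):
    """Price an income-share loan: the lender wants the principal back plus a
    margin, expressed as a multiple of the borrower's expected annual income,
    repaid at a fixed fraction of income per year with no interest."""
    multiple = principal * (1 + lender_margin) / expected_income
    years_to_repay = multiple / income_fraction
    return multiple, years_to_repay

# Same $50k loan, two degrees: the less marketable degree advertises its
# true cost up front as a longer, heavier repayment obligation.
print(income_share_quote(50_000, 60_000))  # high-earning degree: ~1x income, ~10 years
print(income_share_quote(50_000, 30_000))  # low-earning degree: ~2x income, ~20 years
```

Because the quoted multiple scales inversely with expected income, the sticker price itself communicates how marketable the degree is.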
Thoughts?
I’d like to propose an explanation for cost disease, and why it is specifically an American problem. In every example @Scott cites, and one I’ll add below, a consumer chooses “the best” option from range of options. I think an American obsession with “winning” — or really, with “not losing” — may lead us to devalue second-best options. “Second-best” suppliers, facing low sales, must often choose to either compete for the top slot in their market, or the bottom, often excluding the middle entirely.
Consider, for instance, Google’s line of Nexus phones. In 2012, Google released the well-reviewed Nexus 4. They priced it at $350. This was very much a “middle of the line” cost, between typical values of $200 and $600. Sales, however, disappointed. In 2013, Google released the Nexus 5. It was again well-reviewed, similarly priced, and had similarly disappointing sales. In 2015, Google released the Nexus 5x. While priced again at $350, reviewers noted it was of slightly lesser quality…and sales again disappointed. In 2016, Google abandoned their line of Nexus phones, in favor of the “Pixel” models. These models are priced at the top end of the market: upwards of $600. Faced with low demand, the supplier exited the mid-tier market. In fact, the most (only?) popular mid-tier phone today is the hard-to-find OnePlus 3T, made by a non-American company with no American retail presence.
The key point here is that this whole outcome is, economically speaking, really dopey. These $400 phones – and even $200 phones – are really, really good. Naturally, they’re not quite as good as the $600 phones. But many folks would be much happier with the cheaper phones and extra cash. At the very least, enough folks ought to be happier to support some domestic market for these models.
I think we see this pattern across industries. I’ve never heard a friend say, “I could afford this more expensive education for myself or my children, but I’m going to go with the second-best option, because I’d prefer the savings.” Ditto for medical care, or buying a home, or renting one. Scott’s examples seem consistent with this. He himself declined a cheaper college education (and now seems to regret that choice); he describes a patient accepting an unnecessary psychiatric evaluation; he links to pet owner who can’t imagine their dog would be happy with a few pulled teeth, and a city government building unnecessarily spacious transit stations. In each case, the consumer failed to really consider the benefits of the second-best choice in each market, relative to the best; the consumer evaluated only those costs.
Other countries don’t have this problem, at least not in the same way. For instance, a good friend of mine was recently diagnosed with an uncomfortable, though not life threatening, condition. Without insurance, the sticker price for the treatment is around $50k/yr. As it happens, my friend is a national of a Western European country. They explored returning home regularly and receiving the treatment there through their national health system…and were shocked to discover that their European government health plan didn’t cover this treatment at all, and their private American health insurance covered it in full. The European government was perfectly happy to pay for “second-best” care and pocket the savings; the American insurance company paid for it all, and socialized the costs.
It’s hard to pinpoint why Americans shy away from second-best options. But I’m inclined to credit our national fear of losing. Consider the Atlanta Falcons, for instance. Nobody thinks of them as the second-best football team in the country. The Vice President doesn’t host them at the Naval Observatory. In a sense, they’re not even the “losers” of Superbowl LI…they’re just nothing. I wonder if Americans tend to apply this mentality to more arenas than sports; if our anxieties about not having “the best” push us to spend far more than we really want to.
Small wonder, then, that organizations across the economy find ways to spend more money, when we the consumers won’t spend less even if they try. I’m not sure how we could break this cycle, when there’s so little market for mid-price options across so many industries. But if we’re going to start anywhere, I suspect it will have to be in our own homes: by turning down the priciest options ourselves when we can, in favor of pretty-good ones.
As an American, this surprises me. I have done this with my apartments ever since I moved out of my parents’ house, and most of the people I know have done it as well.
i.e., the gross substitution axiom is incorrect; there really is no spillover. This also vindicates the idea that unemployment can be involuntary (how people could possibly believe otherwise is beyond me).
I think the k-12 education cost disease is easily explained in a way that higher-ed and health care are not.
First: The k-12 price increase is “only” 2.5x over 40 years, while higher ed increased ~10x, and health care costs grew 5x (according to the numbers in the article).
Second: while test scores of students in school have not improved, the fraction of the workforce that finishes a high school degree has measurably improved: The fraction of the population aged 25+ with a high-school diploma went from 50% in 1967, to 75% in 1986, and 88% in 2015. That’s a large reason for productivity growth in the U.S. during that time period: workers who can proficiently read English, and do arithmetic, and even a little bit of algebra are more productive than those who can’t.
Here’s the thing: we should expect the marginal cost of graduating each additional student (without dropping test scores) to increase. And the marginal cost of each additional student has increased. The first 50% of students are “easy” to educate: they are raised by relatively well-educated, employed, and motivated parents, are native English speakers, are not malnourished, and have no language, learning, emotional, social, or mobility deficits. But now we’re graduating students who enter the school system unable to speak English, students with learning disabilities, students who need the help of social workers or psychiatric workers to reach graduation.
And @massivefocusedinaction’s post above includes a graph that agrees with the idea that the marginal cost of educating each additional student is a large part of the cost growth. In that graph, “non-teaching staff” has grown 138% over the period when drop-out rates were falling from 50% to 12%. This staff is largely specialist social workers, psychiatrists, and learning and language disability specialists, along with aides for students with the most severe physical challenges. Teaching staff has also grown by 60%, and that may be because class sizes need to be smaller to achieve reasonable outcomes when a small fraction of the students need a lot of extra help.
From a legal perspective, the United States has codified into law that everyone has a right to a k-12 education, no matter the marginal cost of providing that education. Brown v Board of Education, the Rehabilitation Act of 1973, the Individuals with Disabilities Education Act of 1990, and the No Child Left Behind Act of 2001 (among many others) have all contributed.
K-12 education costs have increased because there has been a political decision in the United States to use tax dollars to educate students with higher marginal costs.
ETA: looks like @fc123 made a similar comment about marginal costs while I was composing my comment.
Do you honestly believe that students go into high school not knowing how to do arithmetic yet come out with a good understanding? If you can’t proficiently read English after eight years, I don’t think the extra four are going to help. The reason we have more kids graduating is because they dumbed down high school.
I’m just agreeing with Scott’s data: Average test scores have neither increased nor decreased for students who are in school, yet the fraction of students who are in, and who finish school has increased. That interpretation is also consistent with wage data: The wage premium for a high school diploma has held pretty steady for the last 40 years.
Okay, if you say so. Show me the data.
I’d look instead at policies of purposely advancing children instead of holding them back.
I don’t know if data can predict this, but maybe a good measure would be children held back, correlated with standardized test scores, and how that has changed over time. I.e., using standardized test scores as a marker of learning, and then seeing whether lack of learning still causes children to be held back as much. Ya dig?
Well, you don’t have any data either.
Telling us graduation rates increased is not valuable unless we have some sort of way to know that graduation standards have been held constant.
Scott ended his post with
I was trying to help Scott explain Scott’s data: test scores have neither increased nor decreased, but costs have increased 2.5x. I gave a plausible explanation for that: government policy demanded that we increase high school diplomas significantly. We have data on exactly why the costs have gone up 2.5x: because we had to hire a bunch of specialists to deal with the students who are harder to educate, as shown in @massivefocusedinaction’s graph.
In addition to Scott’s data showing test scores have not gone down, I also gave a link to data showing that private industry gives approximately the same real-dollar value to a high school diploma that they gave 40 years ago. Here it is again: https://trends.collegeboard.org/education-pays/figures-tables/median-earnings-gender-and-education-level-1971-2008.
I’ve proposed a model that does not disagree with the data we have (some of which was provided by Scott, some by @massivefocusedinaction, some by me, none by you.) My model is based on neo-classical Econ 101 supply and demand theory. (The federal government said that they wanted a high school education made available to a much larger fraction of students: that shifts the demand curve, thus the price must rise.) There’s no need to make the model more complicated to explain the data we have, so I left the quality reduction out of my model.
I understand that you dislike public educators, and public education, and would like me to help you say that they are bad, bad people who are purposely undermining the american way of life, or whatever. That’s fine, but it wasn’t the question Scott actually asked, so it’s not the one I answered. If you want to argue that public educators are bad, horrible people, please do, but I’m not going to do your homework for you.
Federal Researchers Find Lower Standards in Schools
An anecdote but illustrative of a wider trend. It’s both hilarious and depressing:
California school district lowers the bar for an ‘F’ to 20 percent
Are you sure you want to use that first link to the NYT as your “evidence”? It is an article about changes in standards between 2005 and 2007 under No Child Left Behind. It points to the National Assessment of Educational Progress, which I think might be where the numbers in Scott’s first graph came from. The summary from that report is:
The fact that a few kids might have been able to push the average up doesn’t mean they haven’t dumbed down schools. It’s easily possible that they could have focused their efforts on bringing the lowest kids up a little bit while neglecting standards for everyone else. And higher test scores mean nothing if they simply forget everything they learned. I gave you evidence of lowered standards. Do you have any evidence that high school actually makes students more productive? Because the disparity between drop outs and graduates can also be explained by signaling.
Your NYT article talks about changes in standards between 2005 and 2007 in response to No Child Left Behind (which started in 2002). Among the anecdotal data in the article is the statement:
Meanwhile Scott, @massivefocusedinaction, and I have been trying to answer an unrelated question and have provided time series data of measurements from the ~40 year period from 1970 to 2010. Scott’s data from the National Assessment of Educational Progress suggests the quality of education for 17 year olds in school has stayed about the same over that 40 year period. Scott is worried about the 2.5x cost increase over that 40 year period. Most of the cost increase happened in the 35 years before your 2007 article. Most of the increase in graduation rate also happened in the 35 years before your 2007 article.
As I explained to @MattM: I offered a model to explain the data that Scott is worried about.
Here’s the data we have:
a. Brown v. Board of Education, the Rehabilitation Act of 1973, the Individuals with Disabilities Education Act of 1990, and the No Child Left Behind Act of 2001 all dictated that the states deliver k-12 education to a much wider group of students, without taking into account that those students would be much more expensive to educate because they have cultural, language, learning, social, emotional, or physical challenges that interfere with their learning.
b. @massivefocusedinaction’s post gave a graph that showed that the number of teachers during the 40 year period had nearly doubled, while the number of specialist social workers, psychologists, special education specialists, etc went up by a factor of 2.4x during that 40 year period. Scott had already provided data showing that individual teacher salaries haven’t gone up, but @massivefocusedinaction’s data explains that the entire cost increase is due to hiring more staff (and in particular: specialist non-teaching staff). Yet there aren’t significantly more school aged children, so the arithmetic says that the cost per pupil has gone up. So the question now is: why did all those people get hired?
c. While there aren’t significantly more school aged children, way more of those children are staying in school. And the additional students that are staying in school are harder to educate, thus require expensive specialists. The data that the students are staying in school longer can be found in graduation rates over the 40 year period 1970 to 2010: http://www.census.gov/content/dam/Census/library/publications/2016/demo/p20-578.pdf.
You, and several other commenters, have a deep emotional or religious attachment to asserting that the quality of public education decreased over the period 1970 to 2010. That wasn’t what Scott’s post was about, and it wasn’t what my reply was about. I said there were a lot more people getting diplomas. Saying that those diplomas are of lower quality is fine, but it makes Scott’s question harder to explain, not easier.
If we want to say that the quality of high school diplomas have decreased we need to explain the following:
Labor productivity in the United States increased somewhat during the period 1970 to 2010, by about 1% a year (http://www.nber.org/papers/w15834.pdf, look at Table 4 on page 40, labor productivity is the column labeled “MFP” (multi-factor productivity)). That’s consistent with having a workforce that is better educated. Labor productivity increasing doesn’t prove much of anything about changes in quality of high school diplomas, but we need to provide a mechanism to explain how the work force is increasing its productivity despite the fact that high school diplomas are becoming lower quality. I don’t know of such a mechanism.
We don’t have any direct measurements of diploma requirements (if I knew what such a measurement was called I’d have already provided a link to a time series from 1970 to 2010, but I don’t so I haven’t. I’ve been asking you to provide one, and you haven’t either.) What we do have is the National Assessment of Educational Progress which shows that despite the fact that a far larger fraction of 17 year-olds are being assessed (because a larger fraction of 17 year olds are staying in school than in 1970), the average score hasn’t changed much over 40 years. That’s Scott’s original data. You’ve argued that maybe the average has stayed the same but the variance has increased. That’s a fine argument, but we don’t have the data about the variance, and you are arguing that a significant number of 17 year-olds are getting a better education than they did in 1970. That seems … unlikely.
Finally, the wage premium for a high-school diploma has stayed constant at about $7000 (2008 dollars) from 1971 to 2008. If the quality of high-school diplomas has dropped we need to explain why free-market employers are still willing to pay $7000/year more to high-school grads than to high-school drop outs despite the fact that the supply of high-school grads is much larger (compared to high-school dropouts) than it was in 1970. I don’t know how to explain that either.
So: I am very, very, very sorry that I can not help you in your religious/emotional quest to prove that public educators are monsters dedicated to destruction of all that is Christian and good in the United States. I’ve done my best, and failed. Good luck in your quest.
There are a million reasons why productivity can increase or decrease. The quality of high school education probably has little to do with it. And if you read Robert Gordon, you would know that productivity has gone down over the last 40 years, which doesn’t help your case. I don’t really have as strong of an opinion on high schools being dumbed down as you think I do. My big problem was with you claiming the link between high school graduation rates and productivity, which I believe is just ridiculous. You completely ignored my point about signaling, which I believe is why there is a disparity between graduates and non-graduates and you still haven’t given any evidence linking high school graduation rates or test scores to productivity. Don’t think that just because I’m biased that you aren’t too.
One correction: Robert Gordon. He says the growth rate has slowed, not that productivity declined. He says that in the paper I pointed to in my last reply, and in more recent papers like this one: http://www.nber.org/papers/w19895.
“Stagnates at a plateau” some time after 2007 is most definitely not “declines between 1970 and 2010.”
Yes, I know I’m biased. I was quite proud of myself for posting a “libertarian” analysis, blaming government intervention for a problem that my bleeding liberal heart would otherwise love to throw more tax dollars at.
You could make the case that product safety testing is a natural monopoly, and a public good.
Consider: there is little benefit in two companies running the same quality tests/checks on the same design/process. Furthermore, very few end customers have the required time, experience, money, or incentive to go through multiple quality check agencies to determine which of them are “trustworthy”. It would be very easy for ShoddyHelmets ltd to establish the shell company TheseHelmetsAreTotallySafeTrustMeOnThis Co to give them solid reviews. Yes, a lot of people would catch on, but by the time that has happened lots of bad helmets have been sold to the people who have not caught on.
Add that to the fact that it is a lot easier to punish the makers of dangerously fraudulent products if the testing agency has the force of the law behind it, and you can start to see why government safety checking is more widely used than private.
I will be the first to point out that the FDA and other protection agencies are not perfect. I just want to make sure we all understand the cost of doing things the alternate way before we switch over.
I don’t think that government safety checking actually happens all that often. Usually building codes will require certification from a private testing firm; Underwriter’s Laboratories is common for fire doors, for example.
I do not see how this comment is relevant to MathEpic’s analysis? Governments may well outsource certain parts of a safety checking programme to specific companies, but this would still be a process of government requiring safety checks in order to respond to the potential market failures MathEpic identifies. In any case, the amount of safety checking done by “government”, by which you seem to mean by a directly-paid government employee and not a contractor, depends on the government and the kind of safety check in question.
Actually, I think I may agree with CatCube’s argument.
If I understand correctly, he is making the point that actual safety checks are not a significant cost in the grand scheme of things.
I will concede that point, and in my previous comment I mentioned the savings of those redundant checks as a benefit of having a government monopoly. I realize now that this aspect of my model is flawed.
I will however stand by my assertion that having multiple standard setting bodies would be confusing to the average customer, and ultimately detrimental to the safety of the consumer. The parts of my argument that I think are important are unchallenged.
No, I actually think that the safety checks can be a major cost driver–because they actually can be really expensive. What I was quibbling with was merely that you couldn’t have private companies set safety standards, since the vast majority of safety standards are set by private industry–the name “Underwriter’s Laboratory” came from the fact that insurance underwriters founded a laboratory to test consumer appliances for fire safety. The government often tries to use already-existing standards instead of writing our own.
Most building codes, for example, are written by industry and enacted into law.
I agree.
Where could all the money be going?
It’s so mysterious! What a mystery.
http://1.bp.blogspot.com/-MBqvW8N-sOE/TbPWVbgSIdI/AAAAAAAABtU/OPwd6UBT-Ck/s1600/ceo-ratio.png
http://www.motherjones.com/files/images/blog_executive_comp_wapo_0.jpg
I guess we’ll never know. It’s just so puzzling.
You’re being silly.
Suppose we made a similar chart looking at top entertainers – people like Colbert and Springsteen – and looked at that group’s salaries as a multiple what the average singer or actor makes. We’d see the exact same trend for the exact same reason. Colbert is playing to a vast broadcast audience and Springsteen is selling out stadium shows; in the 1980s the infrastructure wasn’t there to let top performers be as productive as they are today so they couldn’t get paid as much. Similarly, the heads of huge companies like Apple or Walmart are running vastly larger and more complex companies than they would have been 30 years ago. The top few hundred superstars in any industry get paid more today than in the past in large part because they are creating more value via work that is more highly leveraged than before.
(And as I mentioned above: the median CEO salary is $170k.)
yeah, public school CEOs really get overpaid. Thanks, Obama!
The chancellor of the NYC public schools, Carmen Fariña, makes about $400,000 a year – high for a civil servant but not much for a CEO of a “business” with over 100,000 employees. (And almost half of that figure is her pension – she had retired in 2006 after a 40-year career in the city schools and stayed retired until she was appointed chancellor in 2014.)
In the 1980s, infrastructure pretty much guaranteed the top performers 1/3 of the prime-time television audience, and the prime-time television audience consisted of almost literally everyone who didn’t have the energy and opportunity for some more active pursuit after a day’s work plus dinner, etc. Today, the prime-time television audience is diluted by the internet, and divided among a dozen or so high-profile channels.
Likewise, in 1980 we had the infrastructure to deliver first-run movies in theaters to essentially the entire population, and that population had little alternative if they were in the mood for new long-form audiovisual entertainment.
That’s true, but keep in mind that in the modern era, the market for American culture is now global rather than national. The downsides of competition for attention have been more than made up for by the increase in potential customer base and the rapidly rising wealth outside of the traditional Western first-world nations. Disney, just to pick an example, is doing better than it ever has because of its massive success in building a truly global brand.
Good point. Hmm, that suggests there should be a significant difference in the revenue trends over the past few decades for entertainment that doesn’t translate as well to the global market. American-rules football, black comedians, country-western music, etc.
Not quite true – it wasn’t until a bit later on in the 80s that duplication technology made it viable to saturate theatres with a single particular movie, and reduced competition due to the Reagan deregulation (specifically, letting studios also own theatres) gobbled up the small players in the market.
In 1980, theatres as a whole were far less homogenous than they are today. You might be able to see Empire Strikes Back in any city in America, but there’d also be dozens of other theatres (hundreds in the major cities) showing everything from old classics to the current revivals (back in the day, a movie might be re-released in theatres every five or ten years or so – Disney was especially well-known for this) to foreign/speciality films to the unabashed big-movie ripoffs for theatres that couldn’t acquire a print of the blockbuster of the moment to barely-remembered movies that some theatre-owner thought would turn a profit.
The TV landscape was definitely more homogenous. Movies, not so much – at least not for the first half of the 80s.
Man, that stirred memories I haven’t thought of in decades: seeing Snow White in a theater. That must have been sometime in the 1980s, but I couldn’t tell you when. Ah, nostalgia.
That’s probably less than a percent of the story.
Note that schools don’t even have CEOs, and that sector is a huge part of cost disease.
See, this is the problem with the scapegoat mechanism. What if you sacrifice the scapegoat and the problem doesn’t disappear?
1) Cute bullshit like having your chart run off the axis should instantly make you distrust the source; they’re pushing an agenda, not trying to display data for analysis.
2) The ratio has increased, and there’s a fairness argument to be made there, but that’s entirely about the emotional response of the employees. It has zero bearing on their physical well-being. For example (numbers from a quick Google search) the salary of Walmart’s CEO is $35 million/year, with 2.3 million employees worldwide. If you cut his salary to zero, that represents an additional $15/year per employee.
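The division in that example is worth making explicit. A minimal sketch, using the same rough Google-search figures quoted above (the exact numbers are the commenter’s assumptions, not audited data):

```python
# Rough figures from the comment above (quick Google search, not audited).
ceo_salary = 35_000_000   # Walmart CEO pay, dollars per year (approximate)
employees = 2_300_000     # Walmart worldwide headcount (approximate)

# If the CEO's entire salary were redistributed evenly:
per_employee = ceo_salary / employees
print(round(per_employee, 2))  # about $15 per employee per year
```

Even doubling or halving either input leaves the per-employee figure in the tens of dollars, which is the point being made: the ratio is striking, but the absolute sums involved are small relative to total payroll.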
The analysis of Baumol’s cost disease is off. Cost disease can explain this without needing the wages of the teachers/doctors/subway workers to go up faster than wages in other sectors.
Let’s consider a ridiculously simplified example of an economy with only barbers and bakers. A barber can effortlessly become a baker, and vice versa. People don’t care which profession they are in except for the pay. People consume haircuts and bread.
In the olden days it took one hour to cut someone’s hair and one hour to bake a loaf of bread. But then there is a technological revolution in baking, and a baker can now bake 10 loaves of bread in an hour. Both before and after the baking revolution the baker and the barber have to be paid the same – otherwise people switch. Which means that, supposing they keep all their profits, a barber must now earn enough to buy 10 loaves of bread. The barber costs 10 times as much for producing the same good as before, whereas the baker also costs 10 times as much as before but produces 10 times as much. This gives increasing cost of barbering without increasing relative wages.
Now, if you also want barbering to end up being a larger share of spending, we can get this as long as people don’t want to spend all their extra income on bread. For simplicity, suppose at the old prices and incomes a third of people chose to be barbers and two thirds to be bakers. Then after the baking revolution they use their new-found wealth (at a price of 1 haircut = 10 loaves) to demand 5 times as much bread and merely double their demand for haircuts. That would double the number of barbers and halve the number of bakers – meaning that, as a proportion of the economy, barbering had moved from one third to two thirds. Of course, this would suggest that more of these things are being consumed – but I think that is probably true of the industries that Scott discusses.
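The arithmetic above can be checked with a toy model. This is only a sketch of the two-good example in the comment (the productivity factor and the demand multipliers are the ones assumed in the text, not empirical values):

```python
def economy(bread_per_hour, haircut_demand, bread_demand):
    """Toy barber/baker economy. One haircut always takes one hour.

    Free movement between jobs equalizes hourly pay, so a haircut (one
    hour of labor) must cost as much as the loaves one hour of baking
    produces. Returns (barber share of total labor, haircut price in loaves).
    """
    haircut_price_in_loaves = bread_per_hour
    barber_hours = haircut_demand                  # one haircut per hour
    baker_hours = bread_demand / bread_per_hour    # hours needed to bake demand
    barber_share = barber_hours / (barber_hours + baker_hours)
    return barber_share, haircut_price_in_loaves

# Before the baking revolution: 1 loaf/hour, 1 haircut and 2 loaves demanded.
share, price = economy(1, haircut_demand=1, bread_demand=2)
print(share, price)   # a third of labor cuts hair; a haircut costs 1 loaf

# After: 10 loaves/hour; haircut demand doubles, bread demand rises 5x.
share, price = economy(10, haircut_demand=2, bread_demand=10)
print(share, price)   # two thirds of labor cuts hair; a haircut costs 10 loaves
```

Note what the model does not contain: any change in relative wages. The haircut’s relative price rises tenfold, and barbering’s share of labor doubles, purely because baking got more productive.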
So the fact that the wages of teachers, doctors and so forth have not been increasing at a faster rate than comparable professions is totally compatible with the Baumol cost disease explanation. Note, you can add a lot of complications in here but the basic result still goes through.
Of course, none of this is to suggest that the other issues Scott raises are not also driving up costs or preventing efficiency increases – I think the direct evidence he provides is very compelling. But it is the case that you could explain these cost increases via higher efficiency growth in other sectors without seeing an increase in the relatives wages of these sectors.
I think I’m failing to understand something. Why are costs not exploding in other sectors in which productivity has not improved? For instance bakers are no more efficient than they were 50 years ago, yet a loaf of bread doesn’t cost 10x as much as it used to adjusted for inflation.
I think you’re overestimating the number of people that get their bread from artisanal bakers. Most of it’s mass-produced and would capture efficiency gains; the stuff that isn’t is essentially a luxury good.
Bread was probably a poor example. Choose any industry that is not tech and produces something tangible – let’s say haircutting. Productivity gains have been negligible, but the costs have not exploded in the way that healthcare and education costs have.
Productivity gains have taken place lower in the value chain though, right?
If the process of harvesting grain is more efficient (which I assume it is) that would result in lower prices for bread, even if the baking process remains constant.
How about haircuts? Very little increase in productivity, but little increase in cost.
Haircuts cost way more than they used to. The real wage of a hairdresser in 1900 was a fraction of the real wage of a hairdresser now – despite the fact that they do essentially the same job. Note, this is real wage not relative wage – relative wage should stay the same (kind of) as jobs with comparable requirements and pleasantness.
Now, we don’t have people worrying about the haircut cost disease because it was never a big part of expenditure in the first place. It would be good to know if proportion of gdp spent on haircuts has gone up or down – I suspect up, but that is not necessarily implied by the theory.
I said 50 years ago, not the 1900s. Haircuts do not cost much more, per BLS data. This is the same period in which healthcare and education costs skyrocketed.
https://www.bls.gov/opub/mlr/2014/article/one-hundred-years-of-price-change-the-consumer-price-index-and-the-american-inflation-experience-10.htm
https://data.bls.gov/cgi-bin/cpicalc.pl?cost1=1.42&year1=1955&year2=2016
Ah sorry, 50 years – my bad.
So according to a quick Google search, an average haircut now costs $28. The link you gave says the average haircut used to cost $12.72 in real terms. So that’s more than a doubling – a large increase in cost with little or no productivity increase.
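The ratio implied by those two figures is easy to make explicit. A small sketch (the $28 is a rough search-result average and the $12.72 comes from the BLS link above; both are the commenters’ numbers, not mine):

```python
# Average haircut prices from the comments above.
old_real_price = 12.72  # ~50 years ago, expressed in today's dollars (per the BLS link)
new_price = 28.00       # rough current average, per a quick Google search

ratio = new_price / old_real_price
print(round(ratio, 2))  # about 2.2, i.e. more than a doubling in real terms
```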
Nonetheless, while in line with Baumol cost disease predictions, it has increased at a slower rate than those other areas. Which raises the question of why.
One obvious (certainly not the only) reason is that hairdressers and doctors/teachers have different skill sets and hence different outside options. Indeed, doctors and teachers have been paid equal to or more than the mean worker for all that time, whereas hairdressers are paid well below the median now, and I would guess were below the median 50 years ago too.
Over the last 50 years we have seen mean earnings grow at a much faster rate than earnings in the bottom 50 percent – indeed some argue that the median has totally stagnated. This would imply a more rapid increase in the costs of those goods relying on labour which is compensated at the mean or higher (mean>median for most income distributions).
Anyway, I am not trying to say that real wages are the only thing that explains this. All I am saying is that Baumol’s cost disease is totally compatible with flat relative wage profiles for teachers and doctors. Indeed, as teacher wages have increased by 50% while the per-student cost has increased by 250%, there is pretty much no way it can all be explained by Baumol’s cost disease. Likewise for medicine and elsewhere.
I don’t know about other things, but with haircuts the basic “just a haircut, like in the 1960s, but with today’s styles” is still available for $12. You can see Supercuts advertise on TV.
Lots of upper-middle-class people consider themselves too good to use Supercuts, but, at least for this industry, the “just make it normal” service is well available to anyone who wants it.
I did a double-take at the average haircut price being $28, but it makes sense once you realize that that’s the average for both genders. Men’s haircuts are indeed fairly cheap (though they’ve still gone up considerably; when I was a kid, my dad and I would go to the barbershop and spend $10 for haircuts for both of us). But women’s haircuts tend to be much more complex and labor intensive; my wife has thick shoulder-length hair and it costs her about $50 plus tip every time she goes to get it cut. (And she’s not going to super-expensive trendy salons; this is fairly barebones.)
No. That’s the average haircut for men. The average price is almost always higher than the cheapest price 😉
In all of these problem sectors, it seems the resources consumed have shifted toward servicing, and extending the definition of, the marginal “customer”. This can explain, I think, some of the above.
E.g., 40 years ago hospitals received 100 customers. Ranked by severity, patients 1–20 died, and no one really tried to save them (some comfort care, but that was it). Today hospitals are obliged to try to save patients 5–15 (the 85-year-old with a triple bypass, the 20-week preemie). The total number of staff needed for this task swamps increases in individual productivity. You just need more people, even if each is more productive or better trained than in the past. So salaries for each do not go up that much; there are just more of them, total costs go up, and outcomes over the patients treated are somewhat, but not much, better (some now make it, but some fraction still die). Hence the medical curve shows some improvement, but not 1:1 with cost.
In education, in the 1950s-1970s we could afford to socially promote non-academically inclined students, not really expend effort on them as long as they kept quiet in class, then have them leave at age 16 to go work at Ford. Universities could count on getting the higher-performing students. Today, we have to carry much weaker students all the way to the end of high school, and push many into college. And ALL the extra resources go toward getting this new lower end close to what used to be the minimum university-student performance. The top cohort gets little extra and has not really improved. Hence, scores across the new ‘extended’ student population stay flat.
I base this partly on what I have seen from my wife (engineering professor at top university), resources are heavily consumed by the lower performing students, top students have better opportunities than 20 years ago but in general the resources are much less focused on them than on the marginal students.
So if you assume these industries, for whatever reason, shifted focus over the years to servicing deeper into the tail of the population’s aptitude/effort distribution (I am not saying this is good or bad, or whether it was for social or humanistic reasons – I am making no such comment), this would very much explain the overall cost rise, coupled with the lack of the desired improvement in statistics measured across the whole population that now gets services.
In short, in the US we set the policies that drive costs based on the tail of the population, but we experience performance on the average. As an immigrant from a third-world country, I think this is a big difference that is often invisible to the US-born citizens I talk to. Maybe that is why this is a great country and why I am here. All I can say is that it is a worldview that is not common worldwide. Where I grew up, a No Child Left Behind law would have been designed as One Child Left Behind. There just were not the resources, but more importantly, it was simply more socially acceptable to halve the number of slots halfway through an academic program, for example.
So I guess the question is why are we so focused on pushing services into the tails and will we continue to do so? Does society really benefit from having a larger fraction of the population capable of doing crappy algebra? Clearly there will be some point where the cost becomes prohibitive and it will stop: maybe that is what we are seeing now. But it is stunning that this was a 50 year process — if the dynamics in social policy “markets” are that slow it is going to be really difficult to manage.
This is really excellent commentary and definitely something that as a society, we don’t like to talk about. The idea that sooner or later, we have to declare some people “unsalvageable” because of the cost of servicing them is cultural anathema in America.
I would suggest it’s primarily a consequence of having a few generations in which the educated upper-middle class (who are primarily the ones debating public policy) have never experienced resource scarcity. Spend some time in China, where mass famines happened in living memory, and you see a much more hard-nosed and realistic approach. There’s certainly no “everyone can have everything they want whether or not they’re willing or able to pay for it” idealism there.
Alternate theory for the case of health: it is a consequence of the power of a private insurance lobby. The UK and Canada fit your criterion (low scarcity) and yet we all rolled our eyes when “death panels” became a talking point, as though you could run a healthcare system that funded everything. There is an incentive for health insurance companies to produce anti-public-healthcare feelings, and a strategy for this can be to focus on the tail where it doesn’t perform.
I’m curious, how well understood is this really in the UK? I recall a story Megan McArdle told of being interviewed on British radio:
The idea that people under national health insurance schemes think of them in fuzzy altruistic terms seems right to me, though it is hard to evaluate McArdle’s story without knowing what the two people she was talking to thought was the case. Every health official must know that not everything is funded or can be under the NHS, and they must know that people make decisions about it. This makes me think there was some difference of emphasis or miscommunication not conveyed in the story. To criticize myself though, a lot of USA people rolled their eyes at “death panels” too. Perhaps the lesson though is that similar popular sentiment exists in all three countries against the idea of people making these decisions, but in the USA that sentiment was successfully mobilized to oppose a public system.
The figure for £20,000 a year, as quite clearly explained on that web page, is for _public health interventions_ ; helping the anonymous statistical person who avoids cancer if a GP spends an extra minute lecturing them against smoking.
The figure used as a baseline in pricing negotiations with pharmaceutical companies alone is 4 x that, and drugs are only one component of costs.
The idea that spending on an individual patient stops at anywhere like £20,000 is so obviously false to anyone living in the UK it’s unsurprising the panel reacted the way they did.
I’m taking this to mean I successfully read between the lines in a one-sided story, and now I am publicly patting myself on the back.
The claim was not £20,000 but £20,000 per QALY. Is that mistaken?
@1soru1: From NICE’s own website, written by Sir Andrew Dillon, the chief executive of NICE:
What website were you referring to? The link McArdle provides in her article is broken, presumably due to a copy-paste error.
There is no meaningful budget for spending on an individual patient, no point at which anyone says ‘your account is out of money, go away and die please’.
What there is is a budget, influenced by NICE’s QALY numbers, for building the hospital and buying the machinery and hiring the staff that must be in place before anyone can get treated. If a hospital hasn’t been built, or a specialist hasn’t been hired, you can’t use them. If a patient doesn’t get a too-expensive drug, it is not because the budget isn’t there, but because the drug isn’t. It’s not stocked in the hospital pharmacy; the only way a doctor would physically be able to get hold of it is by ordering it online and charging it to their personal credit card.
The result is that with _x_ scanning machines and _y_ average demand you get a scan after a wait of _z_ days. And the theory is to spend whatever money is available so that the cases where ‘z’ is after you are dead is statistically minimized compared to other ways of spending the money.
Of course, in reality it is more complicated than that, with small-scale health care charities wanting to claim a concrete result like an extra scanning machine, political campaigns to keep local hospitals open even in cases where the statisticians say a central one would be better, and so on.
But arguably the real problem with this approach is that, to the extent it successfully minimizes the pain of a low budget, it leads to a budget lower than it otherwise would be.
You are misreading the claim you object to. It was:
That’s not a statement about a budget limit on how much will be spent on a single patient. It’s a statement about what the return in QALY has to be per pound of expenditure in order for NICE to recommend a treatment.
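The decision rule being described here reduces to simple arithmetic: divide a treatment’s cost by the QALYs it delivers and compare against a threshold. A minimal sketch, assuming the commonly cited £30,000-per-QALY upper end of NICE’s range; the function names and treatment numbers are invented for illustration:

```python
# Illustrative NICE-style cost-effectiveness test.
# The £30,000/QALY threshold is the commonly cited upper end of NICE's range;
# the costs and QALY gains below are hypothetical.

def cost_per_qaly(cost_gbp, qalys_gained):
    """Pounds spent per quality-adjusted life year gained."""
    return cost_gbp / qalys_gained

def recommend(cost_gbp, qalys_gained, threshold_gbp_per_qaly=30_000):
    """A treatment is recommended when its cost per QALY is under the threshold."""
    return cost_per_qaly(cost_gbp, qalys_gained) <= threshold_gbp_per_qaly

# A hypothetical drug costing £120,000 that adds 5 QALYs: £24,000/QALY -> recommended.
print(recommend(120_000, 5))   # True
# The same £120,000 adding only 2 QALYs: £60,000/QALY -> not recommended.
print(recommend(120_000, 2))   # False
```

Note that nothing in this rule caps spending per patient per calendar year, which is exactly the confusion the thread is arguing about.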
No; by itself it’s a statement that doesn’t make clear what it is talking about (and it also says £20K per year, not per QALY, which understates the numbers involved in curative treatment by a factor of 30 or so).
You and I may both think the two cases are different, but Megan apparently doesn’t:
‘”If you design a formula to deny granny a pacemaker, knowing that this is the intent of the formula, then you’ve killed granny just as surely as if you’d ordered the doctor to do it directly,”
https://www.theatlantic.com/business/archive/2009/08/rationing-by-any-other-name/23049/
Of course, if you asked an American doctor ‘how many people did you murder this week?’, they would not answer with the number of people they did not admit to a course of non-emergency treatment they had no funds or insurance for. And if you carried on insisting they had actively killed some fraction of those people, they would very likely disagree angrily.
@1soru1: I think you either did not read what McArdle wrote, or you are deliberately changing the subject to avoid admission of error. She was not talking about NICE budgeting treatments for individuals; she was talking about NICE budgeting treatments that will be available in the NHS as a whole:
This is in fact precisely what happens: general medical care in the UK is good, but the NHS will not cover some “heroic” procedures and medications that are available in the US because the cost-per-QALY is too high. This is not a decision made on an individual basis, and nowhere does McArdle even imply that it is. She leads off her quote by explaining that NICE makes decisions on a system-wide basis by issuing treatment guidelines for the whole NHS.
Saying that she is dishonestly talking about treatment “per year” when she has just specified and defined QALY in the previous sentence is only possible with a viciously uncharitable reading that rests entirely on the assumption that she is attempting to confuse the reader with a totally different topic instead of using shorthand for the term she just defined.
The argument that there is a meaningful difference between a program that acts systemically with known-in-advance deleterious consequences for individuals and a program that targets individuals directly for those consequences is laughable, and not something I suspect you would accept in any other context. Or are you declaring your surrender, for all time, of any and all arguments that policies which eliminate welfare measures and subsidies are “targeting the poor”? Or that racially-biased policing policies “target minorities”?
First, I would suggest that there is a significant culpability difference between a doctor recommending treatments between options the medical system is equipped to provide, and the bureaucrat designing the program which determines what facilities the medical system will equip. A doctor who says, “treatment X could save you, but no hospitals in the country have the equipment and training to perform it” is not denying you treatment X. The bureaucrat who says, “treatment X is too expensive, therefore we will prohibit hospitals from purchasing the equipment and training staff to perform it” has a much stronger potential liability for making the decision, if anyone — but of course he’s merely operating within the constraints of the system someone else has imposed on him as well.
Second, I think you badly misunderstand American medical care; our system will tend to present options in the form of, “here are the things your insurance will pay for. If you want to spend your own money, here are additional options it does not pay for but that we can refer to you to a specialist to perform.” The only time you hit “here are options that exist, but which you cannot obtain” are when the treatments are unapproved by the FDA and therefore not legally available in the country. (Usually, these are experimental treatments that have to be performed in countries with weaker medical liability because the lethality risk is too high.)
McArdle’s entire point of comparison is specifically that the American medical system covers a much higher and more expensive number of medical treatments than the NHS because we don’t have a NICE that is measuring their effectiveness by QALY. And she’s arguing that this is a bad thing. But also that many Britons either misunderstand or willfully deny the implications of what limiting those treatments would mean.
> She was not talking about NICE budgeting treatments for individuals
That’s a pure assumption on your part; you can charitably read that into it, but nothing in _her report of her words_ [1] contradicts the other reading, and the reaction of the experts hearing it supports it.
Basic theory-of-mind; everyone involved in a story has to have an internal viewpoint from which their actions make sense. If the experts disagreed with what Megan claimed to have said, what would they have claimed instead as the truth of the situation? That NICE doesn’t exist, that budgets are infinite?
[1] If you can dig out an actual transcript, feel free to post it.
> rests entirely on the assumption that she is attempting to confuse the reader
Yes, that is the case. Do you really disagree? Why else go to the trouble of defining a term and then explicitly _not_ use it?
If I carefully explain to you what a furlong is, and then say ‘the racecourse is 4 miles long’, how long would you think it is?
> a meaningful difference between a program that acts systemically with known-in-advance deleterious consequences for individuals and a program that targets individuals directly for those consequences
No-one was ever jailed as a serial killer for not buying enough mosquito nets. Pretty sure that is a general societal consensus, and one I agree with. If you disagree, you probably need a stronger argument than simply stating the idea is laughable.
Similarly, ‘targeting minorities’ is different from ‘shooting a specific minority’ which is different from failing to spend enough to prevent a generic unspecified person suffering some bad end.
> If you want to spend your own money, here are additional options it does not pay for but that we can refer to you to a specialist to perform.
The same is of course the case in the UK; is that something you thought was otherwise[1]? If so, I’m having a theory-of-mind failure with respect to you; how could you _possibly_ think that was true?
Admittedly, what they generally won’t do is allow you to buy your own drugs off the internet and ask them to put them into their IV drip machine. Instead, you probably need to move hospitals to one that supports the treatment, has done the training and has it in stock. And if that hospital is private, the beds and nurses and cleaning don’t come for free.
If nothing else (i.e. the market won’t support facilities for that treatment), air travel exists.
But that is not what happened here. “mile” and “furlong” are two completely different terms. What she did was equivalent to explaining what a “nautical mile” is and then saying “so the Strait of Dover is 18 miles wide”. Yes, it’s shorthand. It is not the use of a completely different word.
I am well aware that the technical language allows the possibility of the interpretation you supplied when choosing to define her use of “per year” as “per calendar year of treatment” instead of the “per quality-adjusted life year” she had just mentioned. I am accusing you of being deliberately uncharitable, bordering on obtuse to reach that interpretation by stripping it of the context of the previous sentence.
I don’t think we’re going to agree on this point, and without a transcript, it’s pointless to argue the semantics any further.
No, I’m aware the UK has a parallel private healthcare system that works almost completely separately from the public one, but my understanding (perhaps incorrect) is also that NHS doctors will not usually suggest treatments that the NHS cannot provide, or if they do so, it’s more in the “psst, hey, just so you know…” unofficial capacity. In the US medical system, you will usually get a full range of options from your medical provider, because the provider is not paying for it and has no reason to disincentivize you from seeking whatever care is available.
What I was addressing was what I thought (apparently incorrectly) that you were implying: that in America, if your insurance company doesn’t fully cover the cost of a treatment, you’re simply out of luck. The reality of the situation is far more complex than that; many insurers will partially-cover experimental treatments, or perhaps not cover the treatment itself but cover many of the related expenses such as the hospital stay.
Generally speaking, American HMOs such as Kaiser Permanente are closer to the NHS model of having a fixed menu of treatments that they will cover and telling you “you’re on your own” if you need something that they don’t provide. But even they usually have some provision for reimbursement of expenses incurred outside of their network, though typically at a relatively low rate.
About a third of the insured individuals in the US use this model; the rest typically use a PPO plan, which offers a sliding scale of reimbursement based on whether you choose to use in-network healthcare providers for negotiated services or outside healthcare providers. PPOs are typically much more flexible about the types of treatments that can be provided than HMOs; in most cases, as long as a doctor recommends it, they will cover it. For obvious reasons, most people with chronic medical conditions tend to prefer the PPO model even though the reimbursement rates tend to be lower.
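The PPO “sliding scale” described above is just differential reimbursement rates. A minimal sketch, where the 80%/50% in-network/out-of-network rates and the bill amount are hypothetical, not taken from any real plan:

```python
# Illustrative PPO sliding-scale reimbursement: the plan pays a larger share
# for in-network providers than out-of-network ones. Percentages are hypothetical.

def patient_share(bill, in_network, in_pct=80, out_pct=50):
    """Return what the patient owes after the plan reimburses its share."""
    pct = in_pct if in_network else out_pct
    return bill * (100 - pct) / 100

print(patient_share(1_000, in_network=True))   # 200.0 owed in-network
print(patient_share(1_000, in_network=False))  # 500.0 owed out-of-network
```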
So, I really don’t have a dog in this fight, but this statement is highly misleading:
Now, maybe she simply is communicating poorly, but absent a very close reading, that looks like she is saying that any treatment more than $30K in a given year is likely to be rejected.
Sure, it’s probably 30K per QALY, but that’s not how the statement scans. She should have said, at the very least, “per year saved.” Even better, “per QALY saved.”
HBC, where “a close reading” means reading the preceding sentence?
You are leaving out the preceding bit which explicitly referred to cost per QALY. Only a very careless, or deliberately hostile, reading could miss that.
Sure, if you take that sentence out of context and don’t read the previous one, I agree, it absolutely sounds like she’s making the claim that the system caps at spending 30k pounds per year on a treatment, because the common definition of “year” without a qualifier will be “calendar year”.
But when she’s just said that they measure treatments in QALYs in the previous sentence, the word “year” should reasonably be interpreted to be the “years” she just said they measured, not a different definition of “year”.
Do you want to do what Scott did on the NYT piece where you have some trial readers read the article and see how many come up with your interpretation?
The fact that you have a correct model of the relationship between years and QALYs in your head does not mean that the words as written imply that model. ‘If you already know the right answer, you can change the words as written so it doesn’t contradict it’ is a pretty low bar for explanatory writing.
What’s more, the fact that she was unable to understand why experts were disagreeing with her strongly suggests that Megan doesn’t have such a model. Maybe she thought QALYs were a slightly adjusted year, like a nautical mile, not something well over an order of magnitude away from calendar years.
Thinking about it, that’s a pretty good rule of thumb when dealing with experts in general: if you disagree with them, fine. If you _can’t understand why they are disagreeing with you_, you probably have a bit more study to do…
You’d want to find a test group consisting of policy wonks for whom QALY isn’t MEGO terminology, of course, as the point of comparison is a British government health official chosen to publicly debate health care policy with an economist.
As somebody who’s been reading McArdle for over 15 years at this point, I can assure you she’s well aware of the definition of a Quality-Adjusted Life Year.
I may have actually learned about it reading her columns, now that I think of it.
There are two separate issues being conflated here. One is what happened on the radio programme. We have only McArdle’s report. I agree with those above who have said the people she was talking to probably weren’t as stupid as she makes them out, and that this was a misunderstanding, but we don’t know.

Then there is McArdle’s article. Is it misleading? We might guess it reproduces whatever misunderstanding happened on the radio programme, but it seems at least as likely that, having thought through the matter while writing it up, McArdle would not reproduce the misunderstanding. 1soru1 is arguing that there is an ambiguity in the article and that perhaps the same ambiguity is why the panellists disagreed with her. Everybody else does not agree that it is a real ambiguity.

A poll of Bloomberg readers could tell us whether most people would find the sentence misleading, or we could poll policy wonks as John Schilling suggests, but neither would give us direct access to the first question.
That price doesn’t matter.
I know lots of people, who I talk to with my own face in the real world, who think that the medical system would work fine if it was “just doctors and their patients” deciding what to do. And doctors won’t recommend things that are “too expensive,” for a bunch of reasons rolled into that last part.
I don’t think McArdle was making the health official out to be stupid; I think McArdle was bending over backward to avoid making the health official out to be a liar. An inconceivable possibility, I realize, when a government official is on the air defending his own department.
In her actual words, she is using it in the sentence immediately before the sentence you are complaining about:
Suppose she had said “NICE has a policy for cancer patients. If a patient is older than fifty …”
Would you read the second sentence as being about patients in general or about cancer patients?
If the individual, or the group of individuals, is named in the intended rule, then the rule can be said to target them. If they aren’t, it isn’t. That’s like saying that, knowing my mother isn’t poor, giving a gift to my mother (and having some poor person die of malaria when I could have bought malaria nets instead) is targeting the poor.
That makes no sense. If you read the quoted text, she was talking about the cost in money of a benefit in additional years of life, not a rate of expenditure.
Not a bad rule of thumb in lots of contexts. Just yesterday I was thinking of an obvious improvement that should be made in a product I was using–then looked at the product more carefully and discovered that they had already made a slightly better version of my proposed improvement.
But in this case your explanation requires us to assume that the expert was himself so unfamiliar with the standard terminology in his field (QALY) that he interpreted someone else using it correctly as saying something entirely different.
I think there are more plausible explanations.
> interpreted someone else using it correctly as saying something entirely different.
For her to use the term correctly, she would have had to use it, no? In your model of what she is saying she is using it; in her actual words she is not.
It seems pretty clear to me that McArdle either misspoke or was misinterpreted. For example, she said ‘year’ meaning ‘QALY’, and the experts heard ‘year’ meaning ‘year’. Or she said ‘treatment’ meaning ‘investing in the infrastructure required to provide a kind of treatment’, and the experts heard ‘treatment’. Maybe she was just unlucky and ended up using a form of words also used by those with a common misconception about health care.
Note that, as a basic factual matter, the actual limit for a drug-based cure of current patients, as opposed to public health investment, is actually along the lines of £200,000 to £300,000 per QALY.
https://www.england.nhs.uk/wp-content/uploads/2013/04/a01-p-b.pdf
> At the discounted price, the range of ICERs for ivacaftor is £285,000 per QALY (optimistic scenario) to £1.077M per QALY. The ICER for the optimistic scenario falls within the range observed by NICE for other ultra-orphan medicines.
I’m not clear what your ‘more plausible explanations’ are; they all had a stroke and forgot everything they knew about health economics?
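For readers unfamiliar with the term, the ICER quoted above is simply the incremental cost of the new treatment divided by its incremental benefit versus the comparator. A sketch with invented numbers (not taken from the ivacaftor paper):

```python
# Incremental cost-effectiveness ratio (ICER): extra lifetime cost of the new
# treatment over standard care, divided by the extra QALYs it delivers.
# All figures below are hypothetical.

def icer(cost_new, cost_old, qaly_new, qaly_old):
    """ICER = Δcost / ΔQALY relative to the existing standard of care."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: a new drug costs £600,000 more over a lifetime and adds 2 QALYs
# over standard care -> £300,000 per QALY, in the ultra-orphan range quoted above.
print(icer(700_000, 100_000, 5.0, 3.0))   # 300000.0
```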
@1soru1
Your quote from the NHS is missing an ellipsis. You forgot to include this part:
The NHS appears to be making an exception for CF based on its rarity:
And, having read that, you are still not getting the fact that ‘target cost-effectiveness’ and ‘payment limit’ are not the same?
The NHS is, in actual fact, paying £300,000 per QALY for a single drug, which will be only part of a patient’s treatment. And that’s not an exception; the whole point of the paper is that it is no more than what is already being paid for comparable drugs (those that treat rare diseases well).
If you have a theory about how things work, you need to adjust that theory to the facts, not vice-versa.
Excellent post.
One commonality in the examples cited is intermediation/subsidies. College is paid for by a third party, and financed by generous government loans. Generous in the sense that they are easy to get, not easy to get out of. Health care has massive tax subsidies, and for a good period of time felt “free” to employees. Public schooling is paid for indirectly.
Regarding the section on risk aversion: I happen to be in the playground business. The most common injury is broken bones from a fall. Consequently, our industry has ended up with poured-in-place surfacing, which costs 10x as much as mulch or pea gravel. It is wonderful stuff, but it really increases the cost of the playground. Again, no one pays directly for their playground, and the paying party cannot risk being out of step with the regulations.
Markets cannot function if the risk reward relationship is not direct.
It seems like a phenomenon without a clear central cause.
There are a bunch of big things, mostly relating to misregulation – for example, the non-bankruptable student debt system, which creates an incentive for colleges to raise prices ad infinitum because banks will gladly issue loans that are near-certain to be repaid eventually; or the American lawsuit system, which makes posterior-fortification a highly prudent occupation, seeing as the alternative is to lose all your money; or the regulations on medication and patents, which decrease the supply and inflate the price of medicine; etc. But overall, it seems to be a complex interaction of very many small things, mostly related to everything being interdependent on a huge number of external moving parts (necessitating hoop-jumping), and to the growth of inefficiency that simply follows the growth of the central bureaucratic apparatus and the volume of regulations it produces (some of which are bad not for their sheer amount, but for their direct effects).
Jim seems largely in agreement!
I have an anecdote which may describe some of the ‘it’s more expensive but it doesn’t show up as a line-item’ stuff you’re mentioning here.
The Department of Veterans’ Affairs processes a lot of claims. I’ve seen some of the claim files; they can take up several linear feet of shelf space. I’m told that, by law, case reviewers must review the entire file before making a decision on a benefits claim; to review only the updates since the last decision is called “top-sheeting”. The VA has also been under fire for having an ever-growing claims backlog.
I imagine that the no-top-sheeting rule came about because a claims adjuster made a mistake, something tragic happened, and a very concerned Senator passed a law to make sure it could never happen again. And now we’re spending way more time and money on processing claims, for a small marginal benefit, just like knocking down an accident rate another order of magnitude at a terribly high cost.
The pattern looks like this: an organization that tried to do something like this in the past would have failed without the extra resources. Now the resources are available, so it doesn’t fail – it expands to use them, and our surplus is eaten. I guess figuring out how to fix this is the problem.
I try to avoid making an ask that I can’t do myself, especially when it’s a modest ask, but I’m not up for a research project these days. So here’s a request for future discussion:
Can we add simultaneous statistics for the changing # of students moving through college / # of adults taking healthcare benefits / # of subway miles laid / # of houses built / # of doctors, nurses, and professors moving through the system?
It’s hard to get a clear understanding of the economic flows when we’re just looking at the one dimension (price) shown on these graphs. We really need the other dimension (quantity). I think Scott Sumner warns against “reasoning [just] from a price change.”
Like … I’m guessing that in re: college we’d see a big increase in # of students moving through the system over the same time period. But maybe not. The former statistic would suggest a supply side issue—there’s some huge barrier to opening a new university—and that we should aim our solutions at that issue. The latter statistic would suggest … that something really weird is going on.
In addition to this, it would be useful to look at the costs of things relative to other things. Cost in inflation-adjusted dollars already is a cost “relative to other things”, except it’s an opaque grab-bag of index commodities. Comparing the relative costs of specific things versus other specific things, with an eye to “normalizing out” specific factors such as what would be affected by
– technological growth
– globalization
– population growth
– “wealth inequality”
– etc.
might give a cleaner picture of where costs are actually increasing. Or maybe more appropriately, what stuff is more expensive in some more meaningful global sense.
Regarding education costs, a couple of things that have changed since the 1970s are
1. A proliferation of staffers relative to teachers, both in schools and at the district level. Colleges now have an impressive number of psychological counselors and diversity consultants, and admissions staffs seem much larger.
2. The physical plant is much nicer. Most Baby Boomers in Southern California, for example, went to school in shacks, typically without air conditioning. In this century, however, it’s been common for new public schools to cost up to $578 million, in the case of the Robert Kennedy school on Wilshire Blvd. Similarly, college buildings erected in the last 30 years are much nicer than the ones erected in the Postwar modernist/brutalist era.
>Most Baby Boomers in Southern California, for example, went to school in shacks, typically without air conditioning.
I recently looked at the Boomer elementary school I attended in the 60’s on Google Maps. The bungalows, essentially double-wide trailers, installed in the sixties to deal with the extra students, are still there! (Cubberley Elementary, San Diego). Here in Colorado the schools look like high-end clubhouses for a private golf resort. I am thinking not many parents would tolerate their kids attending class in a trailer. Another factor, different from the boomer era, is size: schools seem physically much larger, and the number of students attending seems much larger than 40 years ago. I wonder if there are statistics that show a cost correlation with those two factors?
With respect to the general discussion of cost vs. outcome, it seems like we are stuck on a path of not changing the approach in any meaningful way. Keep doing essentially the same thing forever and hope for a better outcome. In the case of health care – make it worse by adding a layer of government intervention. Instead of just taking all the money and implementing a single-payer system [and shutting down insurance companies], we will have state and federal “help.” I am not opposed to a government health care mandate, but I am at a loss as to how we could reduce the cost of the system by adding additional layers of “management” and “oversight.”
The bungalows might just be a California thing. Californians seem to be pretty accepting of shacks, as protecting from the weather is usually not as urgent for them as it is in some places (and Colorado is indeed an example of a place prone to troublesome weather you wouldn’t encounter in California).
Without question, across the board. I think there’s tons of clear evidence for what Scott is talking about, but at the same time I feel like we kind of ignored the fact that everyone just straight up learns more in school now, in a way not well reflected in test scores. If I received my parents’ education, I would be totally unqualified to do anything, because I would have a 70’s understanding of math, science, and technology. There’s an unspoken assumption in the educational aspect of this analysis: that teaching people until they reach the level of education necessary to function should require the same amount of effort now as it did in the 50’s. But I think that might not be true, because, at least in technical fields, you simply have to learn more to be competitive. Oscilloscopes, computer clusters, PCR, DNA sequencing… All didn’t even exist when my dad went to college. All necessary and valuable parts of my education. All very expensive.
My impression of my time at the prison called the educational system was that they are on a trend of dumbing shit down, because the intake of students can’t handle it. The way the elder professors told it, we were taught, and held to, a standard much inferior to that of previous generations.
Seems exaggerated, unless you’re literally Elon Musk, in which case I apologize, Mr. Musk. If I received my parents’ education, I’d be somewhat worse off on computer skills (but still would have learned the same basic electronics), but that’s about it.
Regardless, the point of (non-vocational) education isn’t to cram your head full of stuff, or to give you specific skills. It’s to:
a) point out exceptionally capable units from the population,
b) give average individuals the basic capability to earn a taxable income,
c) teach the more exceptional people how to assimilate knowledge quickly.
(This list is not exhaustive, but the other purposes are irrelevant here.)
I’d wager that if you took an engineering graduate from the 40s, handed him a modern programming textbook, and told him to learn it because he was starting next week, you would get better results than if you took an engineering graduate from the 00s and did the same with a 40s-era electronic computing textbook. The 40s guy would probably be amazed at how simple and straightforward things are now, and the 00s guy would have trouble coping with how much he would have to do by hand.
@carsonmcnell – “If I received my parents’ education, I would be totally unqualified to do anything, because I would have a 70’s understanding of math, science, and technology.”
I was homeschooled from 4th grade on, and not very attentively either. My parents made some moderate effort to catch me up on math for the ACT, but for the most part I remember just reading, playing video games, practicing writing and drawing, making knives, and generally doing as I pleased. I appear to have suffered no serious harm as a result.
How exactly has math or science changed since the 70s? What training in technology do schools provide that one can’t get from tutorials on youtube or, failing that, bittorrent?
Yes, I am really curious about this also. I went to college in the ’70’s, and apparently these young kids have all this great training that I missed. Okay, it’s true that I took mostly business courses, but I did take some calculus and physics courses. Obviously I didn’t have the kind of software courses that are applicable today, but I’ve often read in this blog and elsewhere that computer training is the one area where it is clear that a college degree often is NOT needed.
Some fields have changed enormously; the central dogma in biology was “1 gene = 1 protein” for a long time. Turns out that is completely wrong (though it still allowed a lot of knowledge to be gained using that assumption).
OTOH, that kind of extremely narrow trivia is entirely useless to the grand majority of people who go to school. Even Mendelian genetics would be lost on most of them.
I am fairly certain that when people talk about a “1970s education” or the like, they do not mean using literal 1970s textbooks and instructing the teachers to ignore everything that has been learned since 1970. They mean using educational techniques and institutions similar to(*) those of the 1970s to teach modern knowledge and understanding.
*Presumably the computer science classes will use modern commodity desktop machines rather than dumb terminals linked to the mainframe in the basement.
@John
But it’s close enough for general education. It’s probably close enough for anything short of masters’ and doctorates.
If we’re talking about primary and secondary education, you wouldn’t have computer classes _at all_ in the 1970s in most places.
I can’t see math changing, though. Nor physics nor what’s called “earth science” (heck, maybe you could actually teach more of those subjects if you removed some of the environmental advocacy which has replaced them).
Astronomy has changed quite a bit, even in the broad outlines (e.g. cosmic inflation, dark matter, extrasolar planets), but that wouldn’t impact teaching techniques… nor have much practical relevance for most students.
I suppose history has gotten a little bit longer; they certainly never taught about the fall of the Soviet Union in my high school.
The conventional wisdom I hear from professors at non-elite schools is that they now have to dumb down their introductory classes to teach what the students used to know coming in. That’s the opposite of your pattern.
A 70’s understanding of math would be adequate for any profession other than being a mathematician, or perhaps in an academic field with a preference for putting its arguments in the latest mathematical form to make them sound novel. What do you think has changed in the sort of math people learn in college since the 70’s?
The same is true for the science learned in college in any field other than the one you plan to make a career in–and you probably learn the cutting edge stuff there mostly in grad school. Newtonian physics, classical E&M, statistical mechanics, … all that stuff goes back well before 1970. There would be more new stuff in biology, but do you really think you have to learn that in college to go to med school or grad school in the life sciences?
Technology has changed more–but that doesn’t mean harder to learn, just different. Programming in any modern language, to take an extreme example, is a lot easier than writing machine code or even assembly.
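A concrete (if cheeky) illustration of that last point: the one-liner below hides all the bookkeeping – memory addresses, loop counters, an accumulator register – that a programmer working in machine code or assembly would have written out by hand, step by step. (The numbers are arbitrary; this is just a sketch of the abstraction gap.)

```python
# Summing a list: one line in a modern language. The hand-coded
# equivalent (load, add, increment, compare, branch) is what a
# 1940s graduate would have written instruction by instruction.
values = [3, 1, 4, 1, 5, 9, 2, 6]
total = sum(values)
print(total)  # 31
```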
I’ve read my dad’s college math textbooks from the Seventies. They are essentially identical to my college math textbooks from the 2000s, except in writing style. But I went to a pretty good, though not quite elite, college; maybe it’s different on the generic state college level.
Lil’ Rudin is still pretty much the standard analysis text for late undergrad/early grad, and it was written in the 50s. The third edition (what I used in school within the last decade) was published in ’76, but I doubt it changed much back then, either.
On the other hand, in algebra there seems to be continuing innovation in textbooks. For example, Hartshorne’s famous algebraic geometry text is in the process of being supplanted by modern treatments (Liu, Vakil). And for introductory algebra, I don’t think people really use the old school texts (Jacobson, etc), and instead rely on Artin, Dummit & Foote, etc.
Of course the innovation is not in content, but in pedagogy.
Schools:
1. Class sizes are a lot smaller. My 4th grade class photo in 1964 had 46 students (probably one or two were away sick). The evidence that class sizes make a difference is very thin: below 5, yes; above 45, yes; but there is a huge range in between where there is no apparent difference. Visiting my old school a few years ago for our 40th reunion, the classrooms looked like the first-class compartment in a 747 versus our economy class, or worse.
2. There are a lot more admin support people. My old public school now has two (2) “community liaison” people, i.e. PR. The bottom line: half the students and 50% more staff – not counting staff at head office. And worse outcomes, BTW.
3. Streaming is a lot less common (e.g. not done at all in many places). This makes for less efficient teaching due to the wider ability ranges. So whatever small benefit from smaller classes was lost by eliminating streaming.
Business: I think he underestimates the resources consumed by regulations. In my last 5 years as a software architect for a bank, more than half our IT spend was on regulatory compliance projects. This was hundreds of millions a year, for one bank. And that’s not counting the substantial compliance work within other projects.
It is not just direct costs but “not doing that, too hard” opportunity costs. Medicine is ripe for disruption except that the regulators will kill you (e.g. 23andme).
Amdahl’s law states that in any system, the parts whose efficiency you cannot improve eventually come to dominate overall performance and cost. He was talking about supercomputers at the time, but it applies to society as a whole: the parts that are not under pressure to get more efficient don’t, and come to dominate costs.
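The dynamic described here is easy to sketch numerically: hold one sector’s cost fixed while another gets steadily cheaper, and the fixed sector’s share of total spending climbs toward 100%. The even starting split and the 3%-per-year improvement rate below are made-up illustration numbers, not data:

```python
# Two sectors start at equal cost. One gets 3% cheaper per year
# (an invented rate); the other cannot improve. The stagnant
# sector's share of total spending grows without bound toward 1.
def stagnant_share(years, improve_rate=0.03):
    improving = (1 - improve_rate) ** years  # cost shrinks each year
    stagnant = 1.0                           # no efficiency gains
    return stagnant / (stagnant + improving)

for y in (0, 25, 50, 100):
    print(y, round(stagnant_share(y), 2))
```

After a century, the un-improvable sector accounts for about 95% of total cost even though it never got more expensive in absolute terms.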
A bit of a nitpick but:
You are only thinking of smartphones; you can get a prepaid flip phone from Walmart for $5 that will still be infinitely better than old phones (for instance, it will have an alarm and a calculator). Even that’s overpriced, because you could probably buy a flip phone for less than a dollar in Africa (remember those stats on how the vast majority of the world’s population has phones: http://newsfeed.time.com/2013/03/25/more-people-have-cell-phones-than-toilets-u-n-study-shows/)
Phone plans are similarly vastly overpriced (well, at least basic ones are), considering the cost of prepaid SIM cards in Africa.
Well this started out as a nitpick, but somehow became just another point about how everything in the US (and to a lesser extent many other first world nations) is horribly overpriced and inefficient.
It strikes me as odd- and probably very important- that this seems to be extremely US-specific. In Australia we have similar demographics, comparable GDP/capita and a lot of shared culture with the USA, yet have far cheaper education and healthcare (for the same or better outcomes). Taiwan is a bit more different, but not so different that you’d expect a 50-fold difference in efficiency for major infrastructure projects; the same basic argument applies to differences with the EU as well.
So either:
a) something has changed in the last 50 years in the US that hasn’t changed (as much) anywhere else (political? Cultural? Occult? Who knows?),
b) everywhere’s suffering the same cost disease and to the same extent, but the rest of the world besides the US is subsidised by the US (I’ve heard this argument in relation to healthcare- the US absorbs the initial cost of new treatments, for some reason),
c) the US has always been more expensive than everywhere else, but that was previously masked by, say, really high productivity gains that have now stopped, or
d) literally anything else, this is really complicated and I am a simple biotech student with no actual knowledge about what I’m saying.
I like option A politically because it’s the simplest problem and the simplest solution (in the sense that it’s limited to one nation, which is easier to change than the entire global economy), and also because whenever I hear Americans complain about the costs of healthcare or uni my gut response is always ‘I don’t know what you’re doing wrong but whatever it is we’re doing it right, so seriously just move here, apart from the shoddy internet and regular bushfires it’s great I swear’ so if A turns out to be correct I get to feel all fuzzy and validated inside.
Apologies if I’m stating the obvious; your comment on (B) about healthcare makes it seem like the mechanism by which the US is subsidizing foreign healthcare is mysterious and possibly mythical, so let me try to provide an explanation. In a nutshell, it’s a matter of the difference between R&D costs and marginal production costs.
Medical treatments, devices and drugs are extremely expensive and time-consuming to develop. In addition to the many hundreds or thousands of approaches that need to be tried to develop a successful product, the process of proving that the new product is both safe and effective costs an enormous amount of time and effort. A new drug costs between $150 million and $2 billion (depending on risk, possible side effects and interactions) and about 7 years to get approved for sale by the FDA — note that these costs do not include R&D. This is just the testing. (Also note that these numbers aren’t inflated estimates provided by the pharmaceutical lobby; this is the US government’s own estimate.)
But these are costs that only have to be borne once. When a new product has been tested and certified, the cost of production is usually insignificant by comparison (pennies per pill for most drugs). In economic terms we would call this a high fixed cost and a low marginal cost. Moreover, every drug, technique or device has a limited lifespan of intellectual property protection via patents: you can profit from it for a time, but in 20 years, anyone else is going to be able to manufacture your device for the marginal cost and you’ll have lost the opportunity to recoup your R&D costs.
Because the fixed costs are so high, the developer has to be confident that it can recoup its investment from selling the product before time runs out. Now, if the condition that the drug treats is common enough, you can sell enough pills to make your money back within the patent window, even if you’re selling to price controlled countries. Sure, maybe you’d like to make more money on each pill, but you’ll still turn a tidy profit.
Where this breaks down is when the drug is extremely costly to develop and has very few potential customers. Let’s say you’ve spent $300 million on R&D and discovered a new drug that cures Creutzfeldt-Jakob disease. You rush out to patent it, but now need to spend another $2 billion to get it approved for sale, since this is a very complex condition with relatively few patients who can be used for clean scientific trials. By the time the testing process is complete, your 20 year patent will only have 12 years remaining. In 12 years, you need to sell enough pills to recoup $2.3 billion just to break even. The problem is that CJD is very rare, affecting only about one person in a million each year. With a global population of 7 billion, this means we’ll get about 7000 cases a year. Assuming you sell your treatment to every single person on the planet with the disease for 12 years straight, you’ll need to sell it for $27,380 just to break even.
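That break-even arithmetic is easy to check. A quick sketch using only the round figures from the comment above ($2.3 billion sunk cost, ~7,000 cases a year, 12 years of remaining patent life) – none of this is real pharmacological or financial data:

```python
sunk_cost = 2.3e9        # R&D plus approval costs, per the comment
cases_per_year = 7_000   # ~1 in a million across 7 billion people
patent_years = 12        # 20-year patent minus ~8 years of testing

treatments_sold = cases_per_year * patent_years  # 84,000 at best
break_even_price = sunk_cost / treatments_sold
print(round(break_even_price))  # 27381 -- the ~$27,380 figure above
```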
If you’re a drug company CEO and looking at those numbers, you can also probably presume that your shareholders will be rather upset with you if all you did with their $2.3 billion was break even.
Now price controls come into the mix and make it worse. “That’s outrageous!”, says the government of India. “$27,380 is 40 times our median income!” So India declares that you can only sell your drug for $1000 per course of treatment — still a ludicrously high price from their perspective, since it’s 150% of their median income, but they acknowledge that it is life-saving, so it’s worth it. Okay, so scratch off India’s 1/7 of the world population; you’re only going to get at most $12 million in our 12 year window from our CJD sufferers there. Now we need to get the rest of the $2.3 billion from elsewhere.
“Hey, wait a minute,” says the Chinese government. “You agreed to sell the drug to India for $1000 per treatment, so why are you trying to charge us so much more?” You point out that the median income in China is about $10k USD/year, around 15 times what India’s is. “Okay,” they say, “so we’ll buy it from you for $10,000 per treatment.” China then subtly implies that if you’re not willing to do business with the government, they will regrettably not be able to assist with vigorous enforcement of your patent if lawless compounding facilities in Chinese territory happen to appropriate your formula and start producing copies of your medication. Okay, so that means our maximum possible return from China is going to be $156 million. But that’s progress! Now we just need to come up with another $2.1 billion from somewhere, but we’ve already ruled out nearly 1/3 of the world’s population between just those two countries. Oh, right, and then we have to figure out how to make a profit so the shareholders don’t fire us.
By the time you work through all of this, you’re down to the United States last. You’ve worked through the price controls all around the globe and figured out that you can cover $1.5 billion of your development costs (Europe and Australia’s wealthier citizens helped a lot), but that still means you need to make up $800 million plus all of your profit on the US alone. With 300 million citizens, that means we can expect 300 cases of CJD per year; over 12 years, that means we have to get $800 million plus, say, 15% profit margin on our initial $2.3 billion out of 3600 unlucky Americans.
And so we sell it for $318,000 per treatment in the US. Because someone has to foot the bill, and Americans are the only ones who will pay for it. Oh, right, and this whole analysis still assumes we can sell the drug to every single CJD sufferer on the planet for the next 12 years and that all of our marketing and production costs are free. And it ignores the fact that we could have dumped the investment money in a low-risk index fund and made 7% annually, which is to say twenty times more money than we made in profit from this drug. (We are so fired.)
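The $318,000 US price follows from the same arithmetic, again taking the comment’s round numbers at face value ($800 million left to recover after the price-controlled markets, a 15% margin on the full $2.3 billion, and 300 US cases a year over 12 years):

```python
remaining_cost = 800e6        # still unrecovered after other markets
profit_target = 0.15 * 2.3e9  # 15% margin on the total investment
us_patients = 300 * 12        # 300 US cases/year over 12 years

us_price = (remaining_cost + profit_target) / us_patients
print(round(us_price, -3))  # ~318000, matching the figure above
```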
But note what the alternative is: not developing the drug at all because we couldn’t even break even, much less make a profit. The cost of this obscenely-expensive drug is going to save the lives of 84,000 people who would have otherwise certainly died over just the 12 years of the patent, and go on saving another 7,000 people (and growing, as the world population grows) every year thereafter.
So this is how (and why) the US subsidizes the rest of the world’s medical treatment. By working as the last and only wealthy, un-price-controlled market in the world and being willing to pay obscene prices, we enable drugs to be developed that otherwise never could be because there wouldn’t be enough of a market. All of the countries with price controls are free-riding on this, relying on the fact that companies will continue to develop new medical technology as long as they can sell it here.
There are a number of possible ways that this could be addressed to try to reduce the burden on the US, but there’s no getting around the fact that anything we do here will just be shifting the cost-burden onto other, poorer countries. I’ll save discussions on that for another post, though, as this is already obscenely long for a comment.
Excuse me, but there seems to be a serious flaw in your analysis: namely, the FDA is unique to the U.S.A., and most of the cost seems to be coming from the FDA’s testing costs.
Now, obviously you picked those numbers at semi-random. But if the FDA really is as significant a portion of the cost as you make it out to be in that one example, then it’s more like the USA screwing itself over, and possibly also subsidizing the rest of the world.
Yes, that’s an important point. This is a chicken-or-egg problem; I used the FDA approval numbers because they’re the benchmark standard for the world right now; if you can win FDA approval, most other countries will require minimal additional regulatory testing. The EMA (the EU’s FDA-equivalent) has a similar approval process, but they will fast-track any medication that’s already FDA-approved without going through the same formal tests. Since basically all medications are targeted for the US as their primary market for the reasons I outlined above, this makes numbers for the EMA approval process as a standalone very hard to come by; I doubt such figures even really exist because any pharmaceutical designed for sale in the EU market but not the US one is likely to be an extreme outlier for some reason. (As I understand it, basically all drugs that currently fall into this category are simply generics made off of well-known formulas; testing is for bioequivalence of preparation, not unknown pharmacological effects.)
That said, if the EU process is as rigorous as the FDA one (and I’m sure it is, after the thalidomide debacle), I would expect the costs to be similar.
As a counterpoint, though, let’s assume that the FDA’s approval process is 75% waste and that with heroic efforts you could cut the R&D+certification cost of your drug down to $800m. Now factor in the item I specifically said I omitted: that your total returns need to exceed what you could get from just taking your initial budget and shoving it in a low-risk index fund for 20 years.
That would mean you’d need to return $2.89 billion on your $800 million initial investment, which is higher than the $2.68 billion I used for the simplistic “+15% profit” calculation in the original example. So even if you assume massive waste in the FDA numbers, the real outcome is still even worse than the example I gave.
“Now factor in the item I specifically said I omitted: that your total returns need to exceed what you could get from just taking your initial budget and shoving it in a low-risk index fund for 20 years.”
Sorry, didn’t catch that part specifically. But I still have to ask where these numbers are coming from. Moreover, I’m not sold at all that the EU’s specifications are as tough as America’s, because the FDA really is something else.
I do acknowledge quite fully that America tends to subsidize Europe and other nations in this regard. But it’s also worth noting that some of it is undoubtedly self-inflicted. And when you put it in numbers in the way you did, well…you get the idea.
The FDA approval process costs were pulled from the HHS report I linked in the original post. The rough estimate of how much the R&D costs was a lowball of the number given in this study.
Hate to say it, but if you were hoping I pulled made-up numbers out of my ass to exaggerate the problem, the truth is that I deliberately understated just how bad it is, because the reality is so terrible that you can use very low-end estimates and still produce numbers that will shock most people.
Even the disease was picked to be something that’s rare (because this problem doesn’t apply to, say, male pattern baldness cures, which have an enormous potential market) but not that rare. You can see how bad the math is for a disease that (theoretically; actual incidence rates are lower due to specific risk factors and vulnerabilities) affects 7000 people a year — now imagine how bad it is for a disease that only affects 5 people a year. Even if someone was handed a formula for a cure on stone tablets from heaven, it would never be profitable to actually produce it under our current system.
Hey, this is a month late, but if you ever come back to this thread:
Inside of this article is this graph which shows drug chemistry produced by country per decade over the last half century.
Using google and wikipedia mostly, I also looked up the healthcare policies of the various countries mentioned, to see if there were any legislative changes that might correlate with changes in drug production numbers.
-France: health care declared universal in 1974; 1976 and 1977 saw budget adjustments and reduced payment for medications. Precipitous drop from the 1970s to the 1980s.
-Germany: price controls instituted in the mid-80s. Notice the drop from the 70s to the 80s, and from the 80s to the 90s, as the before, transition, and after decades.
-Japan: price controls instituted in the late 1980s. See the 1980s vs. the 1990s.
-The UK: nationalized since the 60s. It is perpetually low across all four decades.
I’m not an expert on international medical law or systems. But I’m curious whether this chart adds to your point in a very general, demonstrable manner – i.e., a government enacts more socialist, less restrained medical system practices, and the number of drugs produced drops.
Now, that article is asking a different question – “Does this really count as an American drug if it came from a research lab in England?” – but I don’t think that matters for the analysis here. Because as you say, inventing a drug is actually pretty easy. It’s done all the time, all over the world, in universities and research labs. It’s the testing and refining and regulation that cost the huge amounts. And all the companies HQ’d in the US are obviously going to be going for FDA certification on their drugs.
I’d appreciate hearing any interpretations you have on this data, and any correlations to medical policy of the listed countries. Are they at all valid?
Nobody’s tuition funded Milo Yiannopoulos; his fee was paid by the Berkeley College Republicans.
BCR is an official student group, which means it gets funded by the student government, which in turn gets its money from a mandatory fee charged on all students at the university. Okay, technically it’s not “tuition”, it’s a different line item on the same bill, but close enough.
I can’t help wondering: if so many important things rise so sharply in cost in “constant” – i.e. CPI-adjusted – dollars, does it mean our CPI calculations are inadequate, or maybe not fit for the purpose we’re trying to use them for (measuring the costs of things around us)? Or have other important things gotten drastically cheaper, so it averages out?
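On the “averages out” possibility: a headline index is a weighted average, so a few sectors can quadruple while the overall index stays tame. A toy sketch – the weights and price ratios below are entirely invented for illustration, not actual CPI components:

```python
# Toy basket: education/health quadruples, electronics gets 10x
# cheaper, everything else rises modestly. The weighted index
# still looks unremarkable despite the 4x sector.
basket = {
    "education_health": (0.25, 4.0),  # (weight, price ratio over period)
    "electronics":      (0.15, 0.1),
    "everything_else":  (0.60, 1.3),
}
index = sum(weight * ratio for weight, ratio in basket.values())
print(round(index, 3))  # ~1.8x overall, despite a 4x sector inside
```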
As a programmer, I can’t complain about my after-inflation earnings trajectory.
Hey Scott, great post. Here’s a link to a research paper: John A. List, Quarterly Journal of Economics, 2003, “Does Market Experience Eliminate Market Anomalies?” It appeared as part of the perennial debate between classical market economists and behavioral economists. The classical economists are always resentful that the psychology types keep winning the Nobel Prize for contradicting classical economic theory, especially Kahneman and Tversky. In that vein, there are all these observable market anomalies, like the endowment effect. List and others claimed that many of the anomalies only appeared because, basically, people didn’t know what they were doing. In this paper he shows that the endowment effect disappears in markets of experienced traders. The converse is interesting, because it reinforces that markets consisting of non-experts will likely be very imperfect.
The “cost disease” you discuss here is clearly in the category of “market anomalies”. Whatever the reason behind it, on the face of it, it’s clear that these are examples of situations where market forces aren’t working the way they’re supposed to. I would observe that one thing these examples all have in common is that they are all markets that EVERYONE is in. In other words, they are very much composed of non-experts.
I sent your post to a friend, and he pointed out that in sectors with well-functioning markets, prices drop. I think it’s clear that you imply this in the post; you seem to be asking, why are these things all non-well-functioning markets? What is it about these markets? You gave a partial answer by describing your own experience, which is that people are making bad (i.e. economically sub-optimal) decisions in these markets. I would argue that all the markets you list here are markets where the customer would naturally think that “their life depends on it”, in one way or another, which is definitely a recipe for emotionally-driven, non-rational decision making.
What would it take to get to a place where these markets functioned better? Some people argue for better incentives. Like, health care where the individual always pays a certain percentage of the cost, so they have incentive to make economic decisions about care tradeoffs. It gets very complicated, though, because people clearly don’t have the expertise to feel comfortable making all those decisions themselves, and indeed the medical system is set up to assume that someone else is supposed to be making the decisions (like in your example of the person who sees a psychiatrist after a heart attack). It’s not clear that making every individual make more of their own medical decisions would lead to a better situation, given that people are definitely not experts…
And so on with the other fields.
The List 2003 article:
https://academic.oup.com/qje/article-abstract/118/1/41/1917048/Does-Market-Experience-Eliminate-Market-Anomalies
I think there are different reasons for these. Health insurance and college: We’ve increasingly decoupled who is paying and who is getting the service. IMO, everything else in these fields is secondary to that.
Housing… may not be a “disease” at all. We have more people and the same amount of land. Regulations setting minimum square footages, maximum occupancies, and numbers of bathrooms, and eliminating rooming houses, etc., probably have something to do with it as well.
Primary and secondary education: Probably the government really _has_ gotten better at wasting money. Also scope creep, security (some schools literally look like prisons nowadays, with high fences and security guards), special ed.
Infrastructure: Lawsuits, environmental impact statements, and unions. It’s amazing we get anything built at any cost given how many people have an effective veto over any given project.
That would be a factor in e.g. Singapore, but in the United States there’s a huge amount of very cheap land out there. Most of the cheapest is in places where you’d be insane to try to build a city (though that sometimes doesn’t stop people; see e.g. Las Vegas), but if you go far enough out, it’s still possible to get twenty or forty acres of decent farmland (which is to say, buildable land with access to water and probably a climate that won’t make you want to kill yourself) for about the cost of an acre in the suburbs.
Sure, but land that isn’t near a suitable place of employment doesn’t help, and we’re not decentralizing employment centers all that much (if at all).
We are in fact going in the reverse direction quite strongly: everything is centralizing into the cities.
That’s a problem, but it’s a different problem than “too many people, not enough land”.
Does anyone know why exactly employment opportunities are becoming increasingly concentrated in large cities? Is it because of some kind of synergistic networking effect whereby it’s easier for a company to find suitable employees in a city that already has other companies in the same industry located there, and therefore they will preferentially choose to locate themselves in these cities whenever possible?
Do employment opportunities function the same way as social networks? (By this I mean, can a certain geographical location become good for employment opportunities solely because lots of other potential employees are already living there? Much as the value of using a certain social networking website shoots up rapidly if everyone else starts using it?)
If this is true, then it would explain why cities that aren’t in “optimal” locations are nonetheless hotspots for employment: it’s because they already have lots of people living there, usually due to historical factors. And it also implies that any geographical location that doesn’t already have a large and/or growing population probably isn’t going to experience much job growth.
Of course, I could be totally off base here, I know very little about the way businesses make these decisions and my understanding of economics is rudimentary at best.
Yes, this is exactly it. The economic term for it is the agglomeration effect.
@Cypren
Awesome, thanks!
@ Nybbler – Nice short summary of the issues. I think best comment so far.
In any economy in any country, there are going to be some costs that go up more than others. The increasing ones won’t be necessarily for the same reasons. But it does seem that increasing regulations, increasing litigation, and increasing expectations of services all play a part. The areas that have the biggest increases seem to be most affected by these three things.
Population density in the US is less than 100 people per square mile; with 640 acres in a square mile, that works out to more than 6.4 acres per person in this country. Even if 90% of land has other uses or is totally unsuitable for living, you still end up with more than half an acre per person in the US.
New York City has a density of about 27,000 people per square mile. Nationally, we aren’t even within an order of magnitude of the population at which “more people/same amount of land = higher prices” makes any sense.
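The back-of-envelope arithmetic above can be checked in a few lines. All figures here are the comment’s round numbers (and the 10% usable-land supposition), not official census data:

```python
# Rough check of the land-per-person arithmetic above.
# All figures are the comment's round numbers, not official census data.
ACRES_PER_SQ_MILE = 640
us_density = 100                         # people per square mile (rough upper bound)

acres_per_person = ACRES_PER_SQ_MILE / us_density
print(acres_per_person)                  # 6.4 acres per person

usable_share = 0.10                      # suppose only 10% of land is livable
print(acres_per_person * usable_share)   # 0.64 acres per person, still over half an acre

nyc_density = 27_000                     # NYC, people per square mile
print(nyc_density / us_density)          # 270x the national average
```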
If the entire state of New York had the population density of a small to medium city- like for example Cleveland, Ohio (~5k per sq mile) then you could almost fit the entire US population in that state.
And that isn’t even one of the large states.
See the conversation with Nornagest above. The problem isn’t a national “too many people, not enough land”. It’s that land isn’t fungible, so it’s “more people competing for the same land which is desirable for some reason”.
Land for building cities is pretty fungible; most big cities are not located on ideally suited geological formations that allow skyscrapers to be built cheaply. King of Prussia (population density <2,500 per sq mile) is a 10-minute drive (longer with traffic) from Philadelphia (population density ~11,500). It has a major center of employment in the largest mall in the US, and is adjacent to a large park (Valley Forge). There is no geological reason why this city will never produce high rises, and there probably isn't any economic reason it won't produce high-rise living, as there is a large commuter base going from Philly out to KOP (and vice versa).
At Philadelphia's population density (95th in the country), KOP would have a population of ~100,000, but zoning laws prohibit this. We can't calculate exactly what would happen if KOP dropped its more restrictive ordinances, but just that 80k increase, without a boost to the surrounding area, would likely cause property values in Philly to fall (or stop growing at their current rate).
I’m not sure exactly what you are thinking when you suggest unions drive up the costs of infrastructure, but on its face this theory seems inconsistent with the much higher unionization rates in some European countries, and their generally higher power across the board.
To offer a counter-point: It may not be that things have gotten [relatively] more expensive, but that certain things – specifically those used to calculate CPI – have gotten [relatively] much cheaper.
It’s undeniable that food and manufactured goods have become immensely cheaper thanks to technology/globalization. If you use them as your metric for price changes, then in comparison other prices will seem overvalued. If you instead calculated CPI based on older technology (say using the price of organic farms or handmade toys), you’d see much less of an increase. So the story might not be that health/schools/etc cost more, but that they cost about the same and that food and consumer goods cost much, much less.
I brought this point up at Marginal Revolution (Tyler Cowen’s blog) and here are some of the responses I got:
I don’t know enough about this to have a firm opinion of my own.
It seems like the share of household income that Americans spend on groceries has halved since 1960. We still spend about the same percentage as we used to on eating out. This makes a bit of intuitive sense, in that eating out is something that we can do more of or less of in response to how much of our budget it appears to be taking up.
So I would generally interpret that as technology and globalization significantly driving down food costs, because food is one thing that is particularly susceptible to technology improvements improving efficiency and globalization making the underlying markets more efficient.
Healthcare and college are both “services” oriented around fixed physical locations. Globalization can’t affect the prices of these services in the same way it does food, so relative to food prices we’re not going to see the globalization bonus to efficiency. And Scott has made a good case here and elsewhere that increasing medical technology has not resulted in all that much improvement in medical outcomes. Likewise, a college educated engineer graduating in 2017 is not any more capable than one who graduated in 1960, modulo technology-specific skills. So as everything else becomes cheaper relative to healthcare and college, which don’t benefit from globalization or technological growth, the amount of our paycheck we end up spending on these relatively “fixed” services increases.
Also, at the risk of clipping the edge of the Dunning-Kruger vortex, people have always wanted to send their kids to college, but now more people can afford to. Demand has increased much faster than supply, so prices have risen. You don’t see a lot of new universities being built, but you do see the enrollments at large universities growing, and huge spending on construction at those universities as they try to absorb more students, and hiring more and more bureaucrats. Still, demand is so much higher than supply that the schools can just charge more and more and people will pay. The “cost” of running the university balloons because it’s a company in a growth phase.
The above paragraph is still true even before you take into account the predatory federal student loans, which allow people who can’t afford to go to college to do so anyway, at prices that make no sense.
A number of people have pointed out the possibility that, you know, companies aren’t actually that efficient, and lots of value is eaten up by irrationalities caused by internal politics and such. And I agree that this is an underappreciated problem (well, in certain contexts). But is there any reason to think that this should be more of a problem now than in the past? If not, this wouldn’t appear to be an explanation of increasing costs.
Is it possible that a lot of the cost increases are from ever more marginal added “comforts” and “features”, with pressure on institutions to “upgrade” facilities and services by people selling those things, just as this pressure exists for individual consumers? I think this is related to the risk aversion element, as that’s a psychological dynamic heavily leveraged by advertising/sales forces. “Protect your family”, “Don’t miss out”…
We are an antifrugal society, perhaps in part a consequence of reaching certain levels of wealth (and not knowing what to do with it?). Our institutions no doubt reflect this. Scott’s university example of added features (Milo and riot) is along these lines.
I think we are also increasingly relying on things to replace human virtue and capacity. The former is more scalable, at least it is perceived this way in our materially advanced but arguably morally underdeveloped cultures, but it can also be both more costly and less effective. What’s more effective and efficient, a community that values health and acts accordingly to preserve and maintain their health, or a health care system built to service a community where those values find little expression? The former is potentially basically free, or better than free in the sense of paying dividends beyond just healthcare savings (basic good exercise and diet can be incredibly cost efficient but require knowledge, motivation and discipline). The latter can be as expensive as you want it to be, with ever more dysfunctional humans propped up by ever more sophisticated technological and human interventions, all just to get back to “normal”. (Could motorized wheelchairs and scooters alone account for the increases?? /jk)
In other words, the collective knowledge and moral strengths of a society can make it more efficient; neither of these is necessarily dependent on sophisticated material means, though they certainly can help, for dissemination of cultural capital, new means of communication.
Anecdotal aside: Henry Rollins on JRE talks about how weightlifting changed his life, and how it all started with a coach who said “you’re a skinny little f*ggot, I’m going to teach you to lift weights”. Despite the harsh language, that love and attention no doubt increased Rollins’s ability to contribute to society. Non-scalable interaction…
So I don’t know if that’s a potential cause or only a potential corrective direction, but it seems like it’s got to be in there.
Regarding tunnels, helpful post just now from Alon Levy in The American Interest: http://www.the-american-interest.com/2017/02/10/why-we-cant-have-nice-things-2/
A common feature here seems to be something to do with incentives. Are our systems somehow putting in place incentives for the actors within them that have this effect? Or to put it another way, are our institutions stupider than the people inside them? In my own place of work I frequently think ‘I know what we should do here’, but I have no authority to do it, and persuading the people two layers above me is a waste of their time and all downside for me. So I do the stupid thing, and sometimes we end up paying money we really didn’t need to. Is it that we have put in place tons of procedures to make things better, but those end up being a set of incentives to follow the system rather than disobey it, even when it’s dumb?
There’s nothing extreme about suggesting market failure. There are some necessary conditions for markets to work right, and they’re notably lacking in some markets, particularly health care. This is a major field of study in economics: http://www.scielosp.org/scielo.php?script=sci_arttext&pid=S0042-96862004000200012
People do have the option to go to a college which is about the same as the ones decades ago. It’s called community college. You can even do it instead of high school. People don’t do that mostly because they wrongfully believe that such an education is of vastly lower value than the social-clubs-and-Milo one.
Social pressure is definitely the cause of rising costs of child care. As in, it used to be it was fine to leave children alone at home. Now such behavior is nearly grounds for getting your children taken away. As a result people spend a lot of money on childcare which they simply don’t need.
But it seems to be a correct belief if you assume that the inherent value of a college degree is not the education, but the signaling value compared to your competitors for employment.
The value of that signaling appears to be high if you don’t adjust for the confounder that the sorts of people who get admitted to prestigious schools are already likely to do better later in life anyway. When that’s adjusted for the signaling value appears to be very small or nonexistent.
That makes sense for industries like tech which don’t have cartels regulating entry. But what about fields like law, where the top echelons are only really recruited from Big Law attorneys, who are in turn only recruited from top-tier schools? It seems like the signaling value there is essentially a hard barrier to entry.
In an extremely unfavorable job market like law is currently signaling becomes very important because there are so many applicants for every position that employers requires the signals on top of everything else just to keep the number of candidates they have to evaluate under control. But getting a law degree today is looking like bad value for your money even under the best of circumstances.
Is the fact that these areas are lacking features essential for a market to function properly a market failure, or a government failure? I’m pretty sure it isn’t the market setting the rules for education, or health care which result in third-party payers, for example. There is a significant difference between Phoenix/Houston and NY/SF in regulations and in housing cost changes over time. One pair has a much more functional market than the other, is that due to market failure?
Some of the lacking features have little to do with the government, as you may read in the citation Bram Cohen provided. Could you provide or cite meanings for government vs market failure that make them exclusive categories?
If you consider everything, including government, a subset of the class “market”, then you wouldn’t make a distinction.
However, the way I’ve seen the terms used most commonly is that a “market failure” is a problem inherent in a natural (or “free”) market, while a “government failure” is a problem caused by political interaction with a market. See this or this.
So the difference in terms is related to the assumed cause of the issue. A negative externality would many times be attributed to the former while a legal price control would typically be used as an example of the latter.
As a result of interactions between them (and yes, a government is also a political market in one sense), very little is exactly one or the other, but to blame government failure for something it exerts little control over (e.g. Internet comment trolling) makes no more sense than to place blame on the “market” for poor service at the DMV. Sure, technically the government doesn’t regulate Internet comments, so you could blame it for trolling, and you could use a black market to get a mostly usable driver’s license, so you could blame the “market” for DMV services… but that would seem to remove what many people mean by the terms, using them to distinguish types of failure.
In the case of education it’s probably mostly a market failure, caused by the pathetic quality of information people get about the value of various forms of education. For health care it’s mostly market failure, although there are definitely places where government regulations are making it worse, particularly around the production and sale of generic drugs. Housing is arguably mostly the government’s doing, because the main problem is zoning and similar regulations getting in the way of more building. That’s a big part of the difference between Phoenix/Houston and NY/SF, although the other part isn’t really a market failure at all: there’s more space around Phoenix and Houston, so the supply of housing is naturally much larger, and people are so much more productive in big cities than outside of them that they’re willing to pay a ludicrous premium to live there, commensurate with the ridiculous increase in pay they get.
How do you define market failure? If you’re right that it’s hard to get info about the value of different educations, that seems like it would affect government bureaucrats and parents alike, as far as deciding what the optimal education path is.
You may not be aware, but the Phoenix metro area is bordered on most sides by Federal restrictions on land preventing expansion (National Parks and Reservations). If you look at a map, you’ll notice Phoenix could only expand to the NW a bit, or a tiny part of the SE. Everything else is off limits due to Federal law. Comparing SF at the same scale, there appears to actually be more available land in the bay area, despite the water and coastline national park.
What’s massively different between the various metro areas is how easy/difficult it is legally to build someplace for people to live and how much the local government interferes in the market for places to live.
It’s slowly getting better in healthcare, e.g. Maryland has all-payer rate setting for hospital rates, and it’s working pretty well (O’Malleycare?)
Healthcare’s always going to have *some* trouble with market failures because we’re not going to give up the quality floor that high barriers to entry and regulations provide, nor are we going to make it completely excludable.
The main problem in health care is one of information. People go to the doctor to get their opinion about what procedures they need and trust them almost completely. This creates tremendous bad incentives for doctors and everyone who has influence over them to exploit patients.
That’s equally true of lawyers, architects, accountants, or almost any other highly skilled profession. None of those markets is as badly fucked up as healthcare.
I mean.
Lawyers are paid by the hour, but broadly speaking you go employ a lawyer when you have a problem and then the issue continues apace. In many cases, the lawyer accepts a portion of the winnings. And if they don’t, then the costs are usually quite high, which is why they’re either borne by the state, borne by corporations, or in the case of divorce borne either by rich husbands or not at all. That market isn’t exactly in great shape. But that’s at least a conversation worth having.
With architecture, you ask for a specific style or so forth. There’s only so much that can be added on, right? It’s not like the architect can tell you that you need an extra building or anything, at least not without you telling them to shove off.
And accounting is pretty much just – here, balance these books.
I think doctors need to earn trust if they want to have more patients come to them. They can do that by showing patients good results over time, either personally or by reputation. Hospitals that want a good reputation and want to limit liability have strong incentives to hire competent doctors.
I’m curious what you mean by exploit patients, but if you mean treat patients poorly, then I disagree as there are strong incentives to treat them well. Do you want to have a reputation as a doctor that is rude, gives bad medical advice, and is highly likely to harm you due to lack of skill, or a doctor that treats people decently, explains themselves well, gives relevant information about the problem and proposed solution as well as associated risks, and that has a record for making people better?
@AnonEEmous
I think you’re understating the importance of those jobs. In all three cases, you have to trust a stranger, an expert in an esoteric field, with something of huge importance to your life. The choice of the wrong lawyer/contractor/accountant can mean enormous costs, the right one huge windfalls, and you have no way to really evaluate how good a job they’re doing except by the outcome. The same is true of doctors.
I have been arguing for years that “market failure” is a misleading label. It describes a pattern, situations where individual rationality does not produce group rationality, that exists in a wide variety of settings, of which markets are only one. Rational ignorance of voters, for example, is a market failure due to the public good problem, although it’s happening in the setting of politics not market transactions.
Government failure, where it is due to the same causes, is a form of market failure. For more details read or listen to.
This is what I was hoping to get at through a more socratic method. I’m baffled that there seem to be fields where a naive separation of market and government and their respective failures persist.
Look at textbook prices. Look at drug prices. Look at how big hospitals and academic institutions operate. I know this isn’t really the point of this blog (and I really enjoy/appreciate trying to mechanistically break down these sorts of big problems) but I think we need to take seriously how much market ideology has infiltrated education and healthcare. I expect that these costs will continue to climb because as long as wealth continues to be generated and the economy continues to grow (no matter how lopsidedly), the relative value of things that are NEEDED will continue right along with the big ol’ pot of money. We will continue to pay as much as the market dictates we should pay because we NEED these things; our hands are tied. It’s pretty telling that “cost disease” happens where consumers have a significant structural disadvantage in price negotiation (generally speaking, most people want to live) and not for things like Xboxes and blenders. Hence why libertarianism isn’t compatible with a compassionate society.
I’m not the first to say this, but you need to factor in clothing and food into your theory. Costs for both of those have come down in price dramatically, but we need them too.
The cost of food has come down “dramatically”? I would like to see that data, because I’m pretty sure that’s completely untrue. Not to mention, the farming industry receives how many billions in subsidies every year? The government knows how important it is to keep food prices artificially low because the market would price people out of their food and things would get very bad very quickly.
It’s hard to argue that food costs have increased relative to inflation, but how do you separate monetary inflation from the cost of food? Don’t you think that something so fundamental to society has had an effect on monetary inflation? Inflation doesn’t just happen, it is affected by real world phenomena, and I would say that you can be pretty certain food plays a role in that. CPI is how inflation is measured, and CPI is directly tied to the cost of food.
In fact, if you take the price of bread in 1913 from this chart: http://inflationdata.com/articles/2013/03/21/food-price-inflation-1913/
and plug it into this inflation calculator: http://www.usinflationcalculator.com/
You will see that the predicted price of bread in 2013 ($1.41) is almost exactly the same as the actual price of bread in 2013 ($1.422). You could attribute that to coincidence or just a reflection of prices not changing, but I would argue that it reflects how much the cost of bread has inflated, given the tight link between food staples and CPI.
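For what it’s worth, the calculator’s result can be reproduced by hand with a simple CPI ratio. The 1913 bread price and the annual-average CPI index values below are round-figure assumptions for illustration, not numbers pulled from the linked pages:

```python
# Sketch of what a CPI inflation calculator does internally.
# price_1913 (~$0.06/loaf) and the CPI index values (9.9 for 1913,
# ~233.0 for 2013) are assumed round figures, not data from the links above.
price_1913 = 0.06
cpi_1913 = 9.9
cpi_2013 = 233.0

adjusted = price_1913 * (cpi_2013 / cpi_1913)
print(round(adjusted, 2))   # ~1.41, close to the actual 2013 price of ~$1.42
```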
Also, eggs shouldn’t be so cheap, wtf.
How CPI is calculated:
http://www.imf.org/external/pubs/ft/fandd/basics/inflat.htm
The costs of making clothing have vastly diminished because we can now just pay little Bangladeshi children cents on the dollar instead of anything approaching a living wage for an American. Clothing production costs have greatly diminished (both quality and wages) and consumerism has completely changed how we buy clothes over the past 50 years. People buy wayyyy more clothing than they used to (https://smartasset.com/credit-cards/the-economics-of-fast-fashion). Costs haven’t needed to increase to allow for massive growth.
This isn’t a foolproof theory, but I would guess that there is something to the fact that housing, healthcare and education costs continue to increase while most other things don’t, and that has to do more with fundamental flaws in markets than it does with “regulation”. If wages continue to stagnate but the market continues to grow, we could have a real crisis.
My impression was that this was to keep American farmers competitive with imported food, which has similar prices despite not being subsidized.
I mean, we can still both be right here. Food subsidies for low income families are another hugely important piece of this. My point is just that food prices do continue to increase along with the market size increasing and that the government goes to great lengths to make sure that people get fed.
If farming subsidies are 20 billion a year, and all of it went to food production (ie no ethanol subsidies that increase the cost of food), that would be ~ $60 per person per year in food cost increases to cover, or $240 a year for a family of 4.
This feels just a bit oversimplified, but maybe I’m wrong. Also don’t forget about the 107 billion in low income food subsidies!
The US government intervenes in the food market to keep prices high, not low.
Your own chart shows that food prices have largely decreased. If bread has risen at about the inflation rate, then meat has as well, cheese slightly more, potatoes much more… and everything else much less. Milk, butter, rice, pork, eggs, sugar, even coffee.
It doesn’t show that food prices have largely decreased. Relative to inflation, some have decreased. But prices have gone up for literally everything on that list. My whole point is that to separate food prices from CPI and inflation is impossible because the two are so tightly intertwined.
But my original point which I feel like you guys are getting away from, is that with things like healthcare and education and housing, consumers have much less power to negotiate costs with the market. No one wants to drive down production costs of education (we still want to maximize efficiency, of course, but can’t do the sort of cost slashing like clothing companies do) and healthcare, so in order to keep growing (which market ideology demands) costs need to increase.
You are making me wonder about the extent to which the problem areas SSC identifies have in common that the people working in those sectors live in developed countries, mostly the USA. Food and clothes are cheap because many of the people dependent on those industries live in developing countries, whereas housing is mostly locally owned, and education and health are mostly services provided directly by local employees. This would suggest, perhaps, that wherever we have been able to undercut costs by relying on people in developing countries, prices grow at or below inflation, and wherever we have not, above inflation.
That’s interesting.
The main purpose of federal farm programs, starting in the New Deal, has been to push farm prices, hence food prices, up, not down. Similarly with European practices.
The pattern you describe may well exist in the third world, but in the developed world it’s the exact opposite.
Many many more billions are spent by the government on making food affordable for consumers, but you’re right that farm subsidies aren’t a good example of my point.
I am saying that food prices have gone up and that they directly affect how we measure inflation. So if you accept inflation you also have to accept that there has been a concurrent increase in food prices.
See this site. Scroll down a page to see the graph. It shows that the food portion of Americans’ budgets has decreased from about 42% in 1900 to about 13% in 2003. The graph also shows clothing down dramatically, but housing up. So there is no relation between need for a product and its tendency to go up or down. But the way housing is different from food and clothing is that there are fewer regulations on food and clothing sales than on housing. Also, food and clothing are easier to mass produce and ship around the world, which makes such products much more immune to the cost disease Scott refers to.
Mark, I’m not denying that total wealth in the United States has massively expanded. That is largely what’s driving the relative decrease in the cost of food and apparel, not a decrease in absolute cost. And besides, my argument wasn’t that needs, per se, will always rise in cost. I am saying that there are structural inequalities in negotiating prices for needs relative to frivolous consumer items, and it is telling that many of these constantly rising costs are for things we need. This growth-first market mindset is fundamental to education and health care, and relative to food and clothing (where costs of production can be cut), education and healthcare aren’t going to all of a sudden start outsourcing to Bangladesh or take other cost-saving measures (largely because people want these things to “improve”). The rise of the healthcare industry is treated as a success story because it has driven growth in American economies in recent years (look at Pittsburgh or Cleveland), but the only way they’ve been able to manage this growth is by doing more and charging more. Look at UPMC and its integration of insurance and hospitals. Education is treated as less of a success story, but many institutions are run like corporations, despite being “non-profit”. Instead, they just buy massive amounts of real estate as investments. How long is this growth sustainable?
Food as a share of household budgets has fallen dramatically. See Table 7—Food expenditures by families and individuals as a share of disposable personal income.
https://www.ers.usda.gov/data-products/food-expenditures.aspx
And keep in mind those Bangladeshi children would be starving without that work due to the general mismanagement of their country.
there is something to the fact that housing, healthcare and education costs continue to increase while most other things don’t, and that has to do more with fundamental flaws in markets than it does with “regulation”.
It’s not a coincidence those are the least market-oriented, most-regulated industries (e.g. the degree to which gov’ts in California have refused to entertain proposals to build more housing seems to be driving costs there more than anything else). The difference is particularly obvious in energy costs, where “investments” in green energy have resulted in higher costs and blackouts for energy consumers in those states and countries. Naturally, there’s a strong lobby that insists none of these problems are caused by green energy, since they can’t survive without massive subsidies, but they’re not very persuasive given the correlations. https://wattsupwiththat.com/2017/02/09/south-australia-heatwave-wind-power-collapse-rolling-blackouts/
“Q: How did socialists light their homes before candles?
A: With electricity.”
“LOOK, REALLY OUR MAIN PROBLEM IS THAT ALL THE MOST IMPORTANT THINGS COST TEN TIMES AS MUCH AS THEY USED TO FOR NO REASON, PLUS THEY SEEM TO BE GOING DOWN IN QUALITY, AND NOBODY KNOWS WHY…”
This may be a little facile, like blaming it on increased administrator positions at school, but… how dynamic are firms in education, health care, hell, even infrastructure?
All of the companies I have been involved in go through some natural feast/famine cycles, and they generally take on a lot of expensive managers, consultants, etc. during the cash-rich times. Those people are the first to go during a downturn (even minor downturns, not strictly recessions or mass layoffs). Maybe it seems like this would be obvious enough to figure out. Sum up the cost of inputs and see which is changing as a % over time. My guess is labor.
I also have problems with using CPI to measure quality of good/services, but I’m not sure how to articulate that yet and I have some reading to do.
Total shot in the dark, rampant speculation, with nothing tangible to show for it, but nonetheless a thought I had:
“Doctors used to make house calls; even when I was young in the ’80s my father would still go to the houses of difficult patients who were too sick to come to his office.”
Is this maybe a hint as to the core of the issue for at least some of the examples? Are we maybe just smacking bang into inefficiencies of scale? Like, to pick a stupid example by way of analogy, algorithmically the travelling salesman problem doesn’t intuitively look like a big problem until you’ve got more than a handful of nodes, at which point you need to solve it with heuristics. Are we still trying to solve problems with algorithms that we should be tossing heuristics at? Are there maybe even no heuristics we can employ on these problems? Could we be witnessing some kind of networking overheads we’re really bad at seeing?
I guess this might be easy enough to explore in principle, though! How does this correlate with population sizes (after adjusting for potential supply and demand changes that might come with that)?
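The travelling-salesman intuition above — trivial at small sizes, hopeless just past that — can be made concrete by counting the tours a brute-force search would have to examine. This is a toy illustration of combinatorial blow-up, not a model of any real sector:

```python
import math

# Brute-force traveling-salesman route count: (n-1)!/2 distinct tours
# for n cities (fix the start city, halve for direction). A toy example
# of how a problem that looks trivial at small scale explodes past a
# handful of nodes.
def tour_count(n_cities: int) -> int:
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(n, tour_count(n))
# 5  -> 12
# 10 -> 181,440
# 15 -> ~4.4e10
# 20 -> ~6.1e16
```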
Not sure this came up, but one thing that has increased tenfold or so is complexity. In the 70’s, there was not this massive customization of education: kids were smart or dumb. I suspect it’s the same for medicine; the number of options is massively higher. And colleges have increased the options available to students (which you point out) and the administrators to manage them. The infrastructure to manage all this is a lot more expensive, after accounting for things like falsely high prices to account for transfer payments to those who can’t afford it.
Look at something that has not gotten more complex, as a control case: air travel. It still costs about what it did in the past and has been coming down since 1995. The product (747, 737) has gotten more efficient and safer, but the design is basically the same. The ATC rules would be the same story. Employees of the airlines are probably not as happy as they once were, but hey, your kids aren’t dumb anymore!
Same for automobiles. Like nurses, they cost the same after adjusting for inflation. It’s just that the system around them is more complex, so everything around them costs more. It probably costs a lot more now to own a car than it used to, but the car itself is the same.
The price of concert tickets has also risen much faster than general consumer prices. Note that this is a partisan source and makes partisan arguments, but there is no real reason to believe the graph specifically about concert tickets is wrong or misleading. I don’t think there is any real doubt that concert tickets have in fact gotten more expensive relative to anything else.
I’m assuming concerts haven’t actually gotten better. It seems fairly clear there aren’t any burdensome regulations or licensing requirements involved. Nor is anybody forced to attend concerts. It seems like a standard sort of market, like other consumer markets.
Krueger says that sales of recorded music have declined, so performers have raised ticket prices to compensate for lost revenue. According to that argument, actually tickets were underpriced before, and performers were willingly accepting less than they could, because they were comfortable with the lifestyle supported by sales of recorded music. That argument seems strange to me. At the very least, it requires some more evidence. Krueger quotes David Bowie, but that performer talks about touring more, not raising prices.
My own layman’s opinion is that society has simply changed its mind about what concerts are worth. Concerts themselves have not changed, but they have become more valuable to us. It has become more important for us to attend them. Performers have noticed this and have raised prices accordingly.
I think I failed to understand how strange the concert ticket market is. As Krueger notes, tickets have in fact been underpriced, and the existence of scalpers is proof of that. There was this sort of tacit agreement that performers would try not to “charge too much” in exchange for the loyalty of their fans.
But when performers started noticing that their audience, when given the chance, would listen to their music for free over the internet instead of going to Tower Records and buying it, maybe the performers felt a bit like the old tacit agreement was broken and decided to raise prices on tickets, absorbing the margins previously taken by scalpers.
This interpretation hinges on whether the ticket price data shows the original sale price for tickets, not the final price paid by attendees. I do not know if that is the case or not. It also implies that scalping has become much less profitable, which doesn’t seem right given that there are new entrants into the market such as StubHub.
Only commenting to say, the existence of scalpers does not prove tickets are underpriced. The venue sets one price for everybody (or one price per section or what-not), but the scalpers need only sell a sub-set of the tickets to the sub-set of buyers most willing to pay (with a particular time crunch to boot). It sounds like jkl is correct: if the evidence is mostly just what Bowie said, we need more. The frequency of sold-out concerts for certain performers is a better argument that tickets are underpriced, though they may well be priced within an appropriate range of uncertainty.
You know what they say about things that seem fairly clear…
For a venue, you have all the standard commercial space regulations/licensing (fire code, workers comp), on top of food service regulations/licensing, on top of your liquor license, and all of that intermingled with ever-increasing rent (especially since you’re in a downtown area) and the costs of maintaining your sound system (and don’t forget getting it repaired every time some drunk idiot spills a drink on your mixing console or the Butthole Surfers decide it’d be funny to set your monitors on fire).
The rest isn’t as starkly about regulations or licensing, more just general economics. On the tippy-toppy-high-end – the stadium shows for top-40 lip synchers and boomer revival acts – you have a lot of extra employee costs to factor in, so if a promoter can’t pay you at least $X, you’re better off keeping the doors shut. For the club circuit, sometimes the atmosphere of live music isn’t exactly conducive towards maintaining one’s liquor license, and with increased gentrification in the cities, you can start getting by on canned music and overcharging for gluten-free yuppie chow. Or you could hire a DJ to play canned music with cross-fades and a drum-loop and not have to worry about providing a large stage and versatile sound system.
Meanwhile, for the band, the cost of travel has increased, and with digital distribution there isn’t as deep a pool of “people who are into your stuff enough to go to a concert, but who are unable to buy your albums”, so you can’t count on merch sales as much as you could back in the 80s and 90s. Not to mention that a lot of venues have either shut their doors or gone DJ-only, so there are far fewer opportunities to be a house band, and more pressure to make each show financially worth your while – it’s a lot, lot harder to get by on the margins (“we’ll play twenty shows and make twenty bucks each night” versus “we’ll play two shows and make two hundred bucks each night”).
I would argue that instead of looking at “where is the money going”, you look at “where does the money come from”. I think if you take a random nonprofit and give them a billion dollars, they’ll spend it, and most likely they won’t use it on something nefarious. To answer the question of why the extra money is going to random potentially frivolous overhead costs of (health care/ academia/ transportation) I think the harder question is not “what are these overhead costs” but “where is this money coming from” and “why is the money not going to (doctors/ researchers/ commuters)”.
Let me give an example. Academic here. Over the last decade I’ve been in multiple universities. Each one has had renovations in my building or a building near me. All the renovations cited “safety”, with the buildings dating from anywhere between the 40s and the 70s. I find it hard to believe that all these safety issues are suddenly cropping up *now*. You could say that the renovations are a consequence of paranoia over lawsuits involving asbestos/ structural integrity/ etc. You could say (as Atul Gawande argues) that we’re prioritizing “disruptive” solutions over incremental ones. All of this would be accurate. But if it weren’t construction, the extra funds would go to something else. It’s not hard to waste money.
Now the real question is, where is the extra money coming from and why is this extra money not going to researchers. In academia, the extra money is coming from increasing costs of education, which are a consequence of (social signalling/ rigid rankings-based consumer preference/ globalization/ … you name it).
Why isn’t the money going to researchers? I would guess that this is a simple case of supply and demand. Academia is an exclusive field. There is a limited supply of PhDs. People who stick around after their doctorate are either incredibly good or incredibly dedicated to doing something “meaningful”. Because there is a “higher purpose” component to academics, these people are often unwilling to do anything else. This means that we have an extremely inelastic supply of labor (it’s probably similar for medicine). Therefore, so long as universities provide a livable income and adjunct professors don’t literally have to moonlight at McDonald’s to supplement their income, they will stay in academia no matter what.
You see this kind of imbalance in any industry where the supply of labor radically exceeds the demand. I’m working in video games at the moment, and the industry has a well-deserved reputation for working people very long and hard hours (12 straight months of 70+ hour work weeks for salaried exempt employees are not unheard of) for pay well below tech industry average. But there is no shortage of young idealistic 20-somethings fresh out of college who love playing games and think that making a career in the field is all they’ve ever wanted. So much like with grad students and adjunct faculty, the labor conditions don’t really improve because the institution has no need to keep you around; there are a thousand people waiting in the wings to take your spot if you leave.
@Cypren – Before I got into the industry, I spent some time taking classes at a community college, and ended up getting a job as a TA for the Game Design Program. The entire experience was enormously depressing. Dozens of students in a modelling class showing the three or four cubic primitives they’d crashed together for their 2-month-long project. Two people in the whole program knew how to UV; no one knew how to make a normal map. Art assignments involved drawing stuff with crayolas. People dreaming of QA jobs as a way to “work their way up from the ground floor”. The worst part was dealing with my boss, the head of the program, an honest, highly decent human being who nonetheless saw all this as offering students opportunity to grow and learn, and was constantly talking about how Game Design was the hot new thing for academia and had all these amazing opportunities… The students were terminally underachieving, and the profs were coddling them to a point that seemed immoral.
The good news is that at least at the three game studios I’ve worked at, our staff were all pretty highly qualified from top to bottom; I think most of the people you’re talking about get weeded out before they ever make it into the industry proper. The bad news is that we also had a lot of very skilled, highly qualified people who made the mistake of trying to enter as QA and quickly learned that the difference between QA and Dev is the difference between enlisted and officers in the military: no fraternization, minimal chance for promotion from the lower class to the higher.
Anyone who tells you to go QA to “get your foot in the door” is sadly unfamiliar with the realities of life in the industry. That might work at a very small studio, but it definitely does not work anywhere that’s doing AAA games unless you are an extremely shrewd political operator.
@Cypren – Sure, none of the places I’ve worked would hire them either… But the kids didn’t know that. Their complete lack of skill was blindingly obvious, but no one in a position of authority was giving anything but asspats and attaboys. It got to the point that whenever I met a new student, I’d show them the “always be closing” speech, just to try and restore some sense of perspective. Letting students enroll in a college program advertised as preparing them for a job but that doesn’t actually prepare them for a job seems criminal. Communicating to them that QA was not a realistic path forward was just another crucial fact that didn’t get taught.
Wikipedia suggests that almost all of those other countries have litigation rules that make weak civil cases more costly, which seems like evidence in favor of the litigation hypothesis. It also means that there’s a relatively straightforward solution.
On schools: we should admit that the domain of schools seems to have expanded massively. Since the 1950s my school added an orchestra, an art program (with kilns and other expensive gear), a newly-renovated football field, computer labs and programming courses, police officers and security cameras, and endless extra-curriculars to get students into college. None of that appears on a math test. This may be stupid spending (criminalizing teenage fistfights almost certainly makes the world a worse place), but it’s not cost inflation; it’s just scope creep. I wish someone would calculate how much of the price hike comes from those things.
(Incidentally, that $3000 private school you mention seemed to count on massive amounts of volunteer labor from well-educated parents. Not exactly a fair comparison.)
But I’ll grant the larger point: lots of money is vanishing for no good reason.
There was a challenge buried in your conclusion: would the no-free-college crowd accept handouts at the quality per dollar we used to have? I’ll vote yes. If we could get an 80% reduction in college costs and keep it that way, I’d happily bury all my libertarian principles and theories and go “we spend taxes on lots of stupid stuff, this at least has a great payout”.
computer labs and programming courses
This. With all the government push towards “we must churn out high-quality, high-skills employees for the information economy!”, such classes will be started earlier and earlier, and the equipment etc. (including software licences and subscriptions for the online courses) is going to be pricey. A basic 1970s/80s education that would have got you a relatively decent job isn’t even at the races nowadays.
More mysterious than the question of why hospitals and universities don’t offer the old standard of service at the same price, is why they don’t try to offer a better standard of service at the new price.
If a hospital could actually cut costs by creating shared wards etc, why doesn’t someone create a hospital with these various cost cuts and use the surplus to poach the most highly regarded doctors at a salary premium. This would then get round the status issues (you’re visiting the most well regarded doctors) and presumably get the doctors on side once they see their new paychecks. If those doctors really are better, you can plaster advertising around the city highlighting your improved survival rates etc.
Fancy restaurants “waste” a lot of money on having fancy china, tableware, dining decorations, and service.
In theory, a restaurant could cut costs by adopting the service standards of fast food restaurants and use the surplus to poach the most highly regarded chefs at a salary premium.
Fancy restaurants aren’t the only place you can get food. The medical sector is the only place you can get medical care. (Well, apart from DIY gray-market stuff.) If there existed a medical or educational or infrastructural equivalent of grocery stores or bodegas or fast food, we wouldn’t be in this mess.
I wonder what the equivalent of those things would be.
I regularly call for the existence of a “wal-mart for medical care” which usually causes people to recoil in horror even though it would probably save people millions of dollars….
No one would get their healthcare there, just like no one shops at Wal-Mart.
…I really hope this has an implied /s tag.
I think they are called urgent care facilities. Small, maybe one or a few doctors.
Yep, and they’re very convenient compared to waiting for an appointment for your primary care doctor or going to the ER.
I haven’t tried it yet, but my health insurance also has a webcam option to see one of a set of on-demand doctors through a skype-like interface. I think they can prescribe you drugs if needed, all while in the comfort of your home. I’ve got to try it out at some point so I can see if it’s effective and worth the cost.
Regulations vary from state to state. Last time I checked (over a year ago) most states they could prescribe, but a few large ones were still not allowing it.
I don’t actually think of this as something extraordinary, but then I may have a different worldview from most people reading this.
In my worldview, the “normal thing” is that any organization consisting of people will, over time, grow more and more inefficient and incompetent, suffering from mission drift, parasitism, and simply the loss of the driving will of the people who founded it. It will slowly burn through the established capital that lets it function and eventually get taken down by newer, more efficient predators, who compete among themselves for the chance to be the one to replace the old, established system – and then go through the same process, becoming established and starting the slide into decay. Ibn Khaldun’s theory.
The things Scott is looking at are things where an Ordered Way Of Doing Things has been established by Organizations In Charge Of Ordering Things; the AMA was incorporated in 1893 and founded about fifty years before, the basic format of building codes dates to 1905, the Department of Education as part of the government to 1867, and the public school system was, per Wikipedia, fully established by the 30s. It’s no surprise they’re showing strain, given that anyone who was an adult when they were founded is long since dead. But since the US government is propping them up, and the US government is showing few signs of disappearing any time soon, they’re allowed to keep growing less efficient without being competed out of existence. We’d logically expect them to be inefficient, just as a result of age.
The interesting thing about this theory, to me, is that it doesn’t explain colleges. Businesses are allowed to select employees without considering college attendance; they’ve just been using degrees as a proxy for competence, and there’s no reason they have to do so. If this theory is true, you’d expect the old established colleges to start getting outcompeted by places like Codeacademy – for programming – or five-dollar Kindles – for, say, history – with the result that the system of “go to college, get a degree in anything, get a job” will stop, which itself will have the result that either the “traditional college system” bubble will collapse soon as nimbler competitors cut it to pieces, or the government will step in to stop the bubble from bursting.
So that provides me with a test for my worldview. Hopefully I haven’t missed too many confounders.
A college degree might have some value and still not be worth its cost. E.g. if the degree costs 100k, but provides 10k in value to an employer, it’s worth it for every employer to hire only grads, but society as a whole comes out 90k behind. And since the cost isn’t paid by consumers, they have an incentive to go to the 100k school that gives 10k in value over the 5k code academy that gives 5k in value.
In your example, people would choose to skip college and then offer to work for 15k less than people with college degrees, and the companies should hire them.
The important part of the story is that education may play an important signaling role. i.e. it might provide 10k in direct value (in higher productivity), but successful completion of college might *signal* that one is worth 150k more (on average) than someone who doesn’t cut it.
Then it’s rational for students to go to college, and for companies to hire people who went to college. This outcome might even be better for society than having no filtering system at all (if that means inefficiency in sorting people between careers), though it’s second-best relative to a more efficient screening system.
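Putting the thread’s toy numbers (all hypothetical, taken from the comments above) in one place shows how the private and social calculations diverge:

```python
# Hypothetical numbers from the discussion above:
# a degree costs $100k, raises actual productivity by $10k,
# but signals a worker type worth $150k more to employers.
cost = 100_000
productivity_gain = 10_000
signaled_wage_premium = 150_000

# Private calculus: the student compares the wage premium to the cost.
private_return = signaled_wage_premium - cost   # +50k, so going is rational
# Social calculus: society only gains the real productivity increase.
social_return = productivity_gain - cost        # -90k, so society loses

# Individually rational, collectively wasteful:
assert private_return > 0 > social_return
```

The point of the sketch is just that nothing here requires anyone to be irrational; the signaling equilibrium can be stable even while it destroys value overall.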
They are facing this kind of competition in the tech world. There’s still definitely a strong cultural bias in some institutions towards people with formal computer science degrees, and for certain jobs in tech, I would agree that it’s entirely merited. (If you need to design new algorithms, you either need to be a mathematical prodigy or you’re going to need the foundation that a traditional computer science education gives you.) But a reasonable number of our line-level programmers in technology companies are people without degrees. (Note that this is much less pronounced in non-technology companies that employ programmers, though; bias in favor of higher education tends to be much stronger in non-technical hiring managers.)
But there’s a key difference between technology and many other industries: it’s fairly easy for us to sit a person down, give them a hands-on test and assess whether or not they have the necessary thought process and skills to be a competent programmer in a few hours. And this test is sufficiently clearly-related to the job function that it can pass a “business necessity” test in a court. Many industries don’t have this advantage, and the college degree is the best proxy they have.
Speaking as someone who’s done a number of tech interviews, I can’t imagine how I’d manage if I couldn’t filter applicants on whether or not they could actually do the job. I’d end up with some kind of weird cargo-culty thing, a combination of pointless credentialism and horribly discriminatory soft-skills stuff.
I meant it when I called it “a closed-loop Human Centipede of lies and fakery“.
What’s really terrifying is when you talk about hiring practices with almost anyone who works in a non-STEM field (or art, since you can review portfolios), you realize that this is exactly how they select people. It’s one reason why the bias towards hiring former coworkers is so high in professions like sales and PR: it’s one of the very few ways you have of selecting someone based on known performance characteristics.
I used to hire accounts payable clerks decades ago. I am a bit confused by the way you hire people. No one I hired had done exactly the sort of thing we were doing in my company, so I couldn’t tell directly how they’d do. When I interviewed for new jobs, I would never look at a job that was essentially the same thing I had been doing before; what would be the point of moving? This is especially true for professional jobs — aren’t most people looking to improve themselves with a new job?
What worked best for me was giving a math test to all my interviewees. This separated the sharper candidates from the rest pretty well. I had interviewees saying that they could do better if they were trained in it, but what I was testing for was their intuitive skills and not how they’d been trained. Luckily it was a pretty small company and HR didn’t realize how we could get killed by Griggs.
But I don’t understand how you can tell explicitly if someone can do the job, unless they are just doing what they’ve done in the past, which doesn’t sound like a very difficult job. Even my clerks needed to have more skills than that.
Assuming you mean “that the interviewee had been doing before”, there are plenty of reasons to move, including better salary and mismanagement at the old company.
My understanding of Griggs might be off, but I think if there was any math associated with the job then you would be in the clear.
I don’t think this is true. In Griggs, the implication was that the test had to be specifically what the job entailed, and so I think a generalized math test would not work. Although my test was specific math they would be doing on the job, so maybe it would have been good.
In any case, I did get slammed by HR about a test about 15 years ago, at the same firm, but after it had grown a bunch. In this case I was hiring an accountant, and I wanted to be sure this accountant was used to dealing with large Excel spreadsheets. So I made up a test that had about 50,000 lines, where I asked the interviewee to do some various math calculations that involved sorting, filtering, totaling, etc. (not exactly what they would be doing, but requiring the same skills). HR told me “no,” my test wasn’t “validated.” I think the HR person was pretty confused, but they had the principle right that tests are dangerous when it comes to litigation.
It’s important to distinguish the house prices and public works stuff (probably mainly regulation related) from the prices of various services (Education, healthcare, etc).
The latter is classic Baumol Cost Disease, and it’s pretty straightforward to explain. We’ve had a huge relative decline in the price of many manufactured goods (due to globalization and productivity gains). Simultaneously, there’s been little increase in productivity in service fields where the bottleneck is man-hours. This includes things like gardeners and barbers, which we don’t worry about too much (because wages of low-skill workers have stagnated). But it also includes things like education and medicine, where the workers are highly skilled (doctors, nurses, professors, teachers, etc).
Since skilled labor has become relatively more expensive over time, the result is a large increase in the relative price of the output of these sectors. Also, demand for these sectors’ output is probably fairly inelastic for various reasons (like health insurance, subsidies, and the education tournament, itself driven by high returns to schooling, which may be mostly signaling).
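A toy version of the Baumol mechanism, with assumed growth rates purely for illustration: when two sectors share one labor market, wages in both track productivity in the sector where productivity actually grows, so the stagnant sector’s unit costs rise even though nothing about it has changed.

```python
# Toy Baumol model. Assumed numbers, for illustration only:
years = 40
manuf_productivity_growth = 0.02    # 2%/yr in manufacturing
service_productivity_growth = 0.0   # services bottlenecked by man-hours

# A shared labor market means wages track the productive sector.
wage = (1 + manuf_productivity_growth) ** years   # ~2.2x after 40 years

# Unit cost = wage / productivity.
manuf_unit_cost = wage / (1 + manuf_productivity_growth) ** years     # flat
service_unit_cost = wage / (1 + service_productivity_growth) ** years # rises

# Manufacturing output costs the same; the service more than doubles
# in relative price despite unchanged output per man-hour.
assert abs(manuf_unit_cost - 1.0) < 1e-9
assert service_unit_cost > 2.0
```

This is only the relative-price half of the story; it says nothing by itself about why demand for these sectors stays so inelastic.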
I’m not saying this is a good and efficient outcome — the salaries of managers and lawyers are going up too, and somebody is paying for the cost of the regulatory state. And maybe a lot of the smart people working in academia are wasting their time setting IQ proxy tests for kids. But I think you’re wrong to think things are worse than the GDP numbers indicate.
Admittedly, I’m not sure how this squares with the numbers you give above for salaries of educators and doctors. At this point, I’m just going to deny the data, since it contradicts what I’ve heard and read from economists working on these questions. I have some hypotheses about what might be going on. A good start would be tracking down a breakdown of the costs of a large university or hospital. I think you’ll find it’s mostly personnel costs, driven by doctors/faculty.
>Admittedly, I’m not sure how this squares with the numbers you give above for salaries of educators and doctors.
One, for teachers at least, the data quoted is just wages, not wages and benefits, and benefits are a large part of their compensation. Two, the number of teachers has increased dramatically relative to the number of students. To re-state Baumol, we’re not just paying our string quartet more, we’re hiring a whole set of understudies and some people to administer them.
I’m in my 70s and my wife recently had hip replacement surgery.
In talking to nurses at the hospital hardly any of them worked a 40 hour week.
The more typical pattern or norm was 36 hours in four days.
Your comment that we are still working a 40-hour week is incorrect.
The BLS data shows that the average workweek in services is now 32.4 hours, and for all private employees it is 33.6.
The goods sector of the economy is still working a 40-hour week, but the goods share of employment has been steadily falling for decades.
I don’t have time to give the full explanation here, but this is all due to stealth protectionism that results from the triangle trade between OPEC, China, and the US, and the ensuing Red Queen’s race of elite competition that funnels money to the top of the economy. Stirling Newberry explained it over a decade ago, but either no one understood or no one cared. I suggest reading his old DKos diaries from 2005 and 2006 to get a fuller picture: http://m.dailykos.com/blog/Stirling%20Newberry
I’m inclined to ask, does *nobody* know what the money is being spent on?
Say you make two pie charts for breakdown of cost for subway construction, college, health care, whatever. One for 1970, one for now. See which slices have grown relatively bigger over time. The fact that I think that’s a fairly obvious place to start looking probably means I don’t know enough about economics. 🙁
I’d suggest it’s more that you don’t appreciate how good rent-seekers are at hiding their rents. 🙂
If a single line item in a budget suddenly explodes out of proportion, it will be investigated and people will be looking for something to cut. But if lots of individual line items slowly grow over time, a lot of money can be siphoned off without showing an obvious and simple cause. This is what’s happening in the named industries, for the most part; if you look at education, for example, no single line item (teacher salaries, administrator salaries, facility costs, etc) has exploded out of proportion to the others compared to what it was in 1960. But all of these items are orders of magnitude more than what they were back then.
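A back-of-the-envelope illustration of this (hypothetical budget shares and an assumed uniform growth rate, not real data): if every line item quietly grows a few percent a year faster than inflation, no single slice ever looks like the culprit, yet the total balloons.

```python
# Hypothetical school-budget line items (relative shares, not dollars):
items = {"teacher salaries": 1.00, "admin salaries": 0.30,
         "facilities": 0.40, "transport": 0.15, "special programs": 0.10}
real_growth = 0.025   # assumed: 2.5% real growth per item per year
years = 40

start_total = sum(items.values())
end_total = sum(v * (1 + real_growth) ** years for v in items.values())

# Every item ends up ~2.7x its starting size, so the pie chart's
# proportions barely change -- but the whole budget has more than
# doubled in real terms with no single "exploding" line to point at.
assert end_total / start_total > 2.5
```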
Just a comment on the teacher’s pay section: Public sector unions frequently bargain hard for more benefits for two reasons. One is that they can then argue the case that they are underpaid with graphs that show flat wages but leave benefits out, and the other is that public officials will often happily trade future obligations for current concessions because their accounting is often set up so that taxes don’t have to rise much in the short term to “pay” for them.
This is well understood as the cause of the substantial rise in light airplane prices since 1970. A single-engine, four-seat Cessna 172 cost an inflation-adjusted $77,000 in 1970. A substantially identical airplane cost $163,000 in 1986. And went out of production the next year, because people weren’t willing to pay that price. When Congress passed laws relaxing the manufacturer’s liability for older airplanes, Cessna was able to reinstate production in 1996 at an inflation-adjusted $190,000. Today, the price seems to literally be “if you have to ask, you can’t afford it”; the manufacturer only advertises fleet sales, but I’d estimate about $400,000 (of which ~$100K is fancy electronics that didn’t exist in 1986 and weren’t standard in 1996).
In this case it is particularly easy to pull out the lawsuit/liability effect because there aren’t many confounders. The 1986 Cessna is so little changed from the 1970 model that they sell at about the same price on the used market when controlled for condition and total flight time. And fear of lawsuits didn’t manifest as safety enhancements of inscrutable cost and value, because light airplane crashes are almost always due to Stupid Pilot Tricks and almost everything that a manufacturer could do to mitigate that (e.g. tricycle landing gear) was standard in 1970. But the manufacturers still get sued, and have to pay millions, so there’s nothing to be done but pay for liability insurance. And, second-order effect, cut production when your customers start balking at the increased prices, so you have to amortize the fixed costs of actually building airplanes over a smaller sales volume.
So, a doubling in price over fifteen or so years, a quadrupling over fifty years in spite of Congress noticing the problem and trying to mitigate it, attributable to safety/liability concerns but not resulting in actual safety improvements. I have no trouble believing something similar is happening in other industries but is harder to discern because too many other things are happening at the same time.
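Checking the arithmetic against the prices quoted above (the current price is the comment’s own estimate; one reading that makes the “quadrupling” figure consistent is to exclude the ~$100K of new avionics):

```python
# Inflation-adjusted Cessna 172 prices from the comment above (USD):
p1970 = 77_000
p1986 = 163_000
p_now = 400_000      # the comment's estimate for today
avionics = 100_000   # fancy electronics that didn't exist in 1986

# Roughly a doubling over the sixteen years 1970 -> 1986:
assert round(p1986 / p1970, 1) == 2.1
# Stripping out the new avionics, roughly a quadrupling over ~50 years:
assert round((p_now - avionics) / p1970, 1) == 3.9
# Including the avionics, the headline rise is steeper still, ~5.2x:
assert round(p_now / p1970, 1) == 5.2
```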
My favorite example of ridiculous order-of-magnitude type cost increases is nuclear power plant construction costs. The plots from this paper illustrate it nicely.
https://www.researchgate.net/figure/292964046_fig6_Overnight-Construction-Cost-and-Construction-Duration-of-US-Nuclear-Reactors-Color
Except, in the case of power plant costs, the causes – at least, for the increasing US costs – are quite a bit more apparent. Pre-TMI, US costs were in line with the rest of the world’s costs. Post-TMI, not so much. New regulatory burdens all by themselves increased the cost of new plants by a factor of ten. Now, this is of course not proof that any of the other problems that Scott mentioned are entirely – or even mostly – caused by increasing regulatory burdens. It does, however, show that government institutions as awful as the NRC do exist, and that their effects can raise costs by the amounts seen in health care, education, etc.
Another way to look at this problem is that there are 3 salient features of these sectors: 1) Rapidly increasing prices 2) stagnating individual wages within the sector and 3) no obvious massive pool of profits being received by anyone. By economic theory, the only way to square that circle is declining productivity. That’s been Cowen’s default answer to the conundrum (and a sensible one, given that he is an economist). An important point to make about declining productivity: If your productivity is actually declining, but your wages are staying the same, you are being increasingly overpaid. The “declining productivity” idea doesn’t entirely sit well though based on anecdotal evidence. As you point out, Doctors, nurses, teachers, and adjunct professors would all dispute the idea they are working less hard than they used to.
There are two ways “declining productivity” can manifest itself when many people themselves feel they are working more and harder, to the point of being overworked: 1) there exists somewhere a semi-hidden/unnoticed group of people who are doing essentially no valuable work but being paid as if they are, and 2) a lot of the additional/new work being done by the overworked people is essentially useless (“zero marginal product” in econ-parlance) or even net-negative, reducing the quality of the work that is valuable. At the anecdotal level, both of these have obvious examples: for 1), there is the explosion in the number of administrative positions in each of these sectors; for 2), think of your $500 psychiatric evaluation example. Another point in support of 2) being common: work dissatisfaction. Most people, and certainly most educated professionals, are happy to work quite hard if they feel their work is meaningful. “Not worth it anymore” is not the cry of someone under stress and pressure, but of someone under stress and pressure and not seeing any commensurate value in the result. It’s an implicit admission of futility and uselessness.
As you point out, “cost disease” is not universal or even close to it. In most other sectors, real costs have declined substantially, and productivity is obviously vastly greater. Even in sectors notorious for bureaucratic bloat and incompetent, horrendous service like airlines and cable TV, the quality per real dollar has greatly improved 1985-present. While it’s entirely arguable that productivity growth has slowed 2005-present vs 1985-2005, there’s no credible argument that productivity has actually decreased collectively.
If declining productivity is real, but not universal and limited to only some sectors (albeit large, important ones), the Occam’s razor conclusion should be that it is not fundamentally a societal-level problem, i.e. that it’s not about overall human capital or overall societal expectations and risk tolerances. The Occam’s razor conclusion should be that the problem is an organizational behavior problem. School systems, hospital systems, civic infrastructure, and universities simply work in a different way than other sectors, and the way they work is terrible. Organizational behavior problems are mostly about internal incentives, where substantial numbers of people in an organization have a personal incentive that is harmful and contrary to the efficiency and health of the overall organization.
I would bet that if you looked at say, 1965-1982, plus or minus a few years, you would see “Cost Disease”, just in different sectors. What happened to those sectors wasn’t just competition. 1980s deregulation played a major role, but the huge change in corporate organizational behavior was driven by the threat of corporate buyouts. In ordinary economic competition, it takes years and years and years for an upstart to take down an ossified, entrenched competitor. The threat of a corporate buyout utterly changed that dynamic. You didn’t even have to have an obvious competitor trying to take you down. If you were a public company exec overseeing a bloated, inefficient cost structure that produced a crappy return on capital, anyone with a junk bond salesman’s business card could kick you to the curb in two months. For a fictionalized representation of this, go watch the Gordon Gekko “Greed is Good” speech from Wall Street: https://www.youtube.com/watch?v=PF_iorX_MAw . Sure, Gekko is an awful person and a villain, but what he is describing is precisely what an organization with Cost Disease looks like. The real-life Gekkos fixed most of corporate America’s Cost Disease.
That is not to say that what’s necessary to fix medicine or education is to sic Henry Kravis or Carl Icahn on them, but it does mean that the incentive structure at the top needs to change, and there needs to be a much greater focus on how it is these organizations go about operating. Given how much taxpayer money they consume, it’s simply not acceptable for them to demand to be free of political interference and have their experience and professionalism trusted. If the organization fails to improve, the top execs should be fired ostentatiously and publicly humiliated.
IMO, there are two large items coincident with Cost Disease: large numbers of veto points and time dilation. When large numbers of people have veto power over anything, by the law of large numbers some of them will convert their veto power into a rent-seeking extortion racket, and “buying them off” becomes just a cost of doing business. An important principle needs to be that anyone controlling a veto point has to be personally accountable for the results. If you have a veto, you don’t get job security. If you want job security, you can’t have a veto.

Time dilation is, on top of being a symptom, actually a part of declining productivity – productivity is output per unit time. If getting something done takes twice as long as it used to, then in the new world half of your work has become waste. It can be hard to explain to someone for whom interest rate math is not an everyday thing, but time is really, really, crushingly expensive. In project finance, a 12-month delay can easily be the difference between a massively successful project and a bankrupt failure even if the end result is identical. In general, bureaucrats and people with a bureaucratic mindset have no conception of the costs they impose just by making people wait around.

Anecdotally, I’ve talked with several tech types who have wanted to get into medical fields, and the time issue is the central problem. It may take 12-18 months to develop a great new product/method, but it can take 3-4 years to get it through the acquisition process of hospitals and insurance companies, and no startup company can make it that long without revenue. It’s not the tangible costs that annihilate their economics but the time lag. A similar story on a much larger scale could probably be told about new drug development.
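The project-finance point about delay can be made concrete with a toy discounted-cash-flow sketch. All the numbers here are illustrative assumptions, not from any real project:

```python
# Toy NPV sketch (illustrative numbers only): a project costs $10M up
# front and then earns $1.5M/year for 15 years, discounted at 12%/year.
# Slipping the revenue by 12 months, with an identical end result,
# flips the project from positive to negative net present value.
def npv(cashflows, rate):
    """Net present value of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

rate = 0.12
build = [(0, -10_000_000)]
on_time = build + [(y, 1_500_000) for y in range(1, 16)]  # revenue in years 1-15
delayed = build + [(y, 1_500_000) for y in range(2, 17)]  # same revenue, one year late

print(round(npv(on_time, rate)))  # positive: project is worth doing
print(round(npv(delayed, rate)))  # negative: same project, now a loser
```

Nothing tangible changed between the two scenarios; the delay alone destroys roughly a million dollars of value.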
For teachers, I don’t know if the non-classroom hours are taken into account? It’s not simply “teach six hours, go home and do nothing” – they have to plan lessons, correct tests, set the next lot of tests, correct homework (if they set homework), figure out if they need to talk to Johnny’s parents at the next parent-teacher meeting about the slippage in his grades, worry because the principal told them “okay, we suspended Jason on your say-so and now his parents have sent us a letter from their lawyer, there’s going to be a board of management meeting about this next week”, etc., as well as all the new box-ticking paperwork that has increased over the years.
So it’s hard to measure productivity when these invisible hours aren’t counted in.
If anything, this would reinforce my point, not counteract it. If teachers are indeed having more work demands placed on them (entirely plausible – school-year length has increased over the years), but the results aren’t improving, this represents an outright reduction in productivity, and the marginal work demanded of teachers is pretty much useless.
A basic, important principle of economics is that workers are not paid by input. They are paid by output. If you work harder but don’t produce more, you don’t get a raise. That’s how it is.
It’s also an argument for expectations changing. Parents in the 1950s assumed that it was their own job to raise their children. Parents in the modern day are more inclined to assume that it’s the teacher’s job to raise their children and the parents are simply the customers paying for the service.
Did everybody take the day off or do you just spend a lot of your work time on blog comments?
The latter, of course, which is often floated as an explanation for wage/productivity divergence in office workers: part of the missing wages is accounted for in time spent dicking around on Facebook.
This theory sounds flippant but honestly it’s the best explanation I’ve heard that reconciles cost disease, productivity/wage divergence, and the missing Keynesian 10-hour work week. David Graeber has written about it a lot.
Did everybody take the day off or do you just spend a lot of your work time on blog comments?
Half day today! All my commenting done virtuously at home on my own time (this is my idea of a wild and crazy fun time for the weekend) 🙂
I’m at work, but currently engaged in porting a codebase of a couple million lines of code from Windows/Xbox/PS4 to Linux. It’s work that involves a lot of “hurry up and wait” in the form of making a two line change, then kicking off a ten minute compile job to see what will break and necessitate the next two line change.
“Oh man, I’m totally gonna reply with that one xkcd comic… that clever bastard!”
It doesn’t explain all of the cost (or even most of it) but for construction, there’s been a sea change in the safety culture that’s driving part of the increase.
For example, for the construction of Hoover Dam, here’s a list of the fatalities that occurred in 1933 on that project. Notice that in the first quarter of that year, they were killing a guy every two weeks on average. Some of them are repetitive accidents (a guy gets killed in a fall on 1 Jan, then a month later they have another fall, workers struck by trucks a month apart, etc.). On a modern USACE or Reclamation dam construction project, that would be considered appalling, and the accident pattern you see in that list could very well end up with the contractor being thrown off the project and their contract terminated. Certainly work would be stopped until they figured out how to stop the carnage. The safety plan that needs to be submitted prior to starting work now is of the same type of brain-melting Lovecraftesque bureaucratese that Deiseach details above. There’s a lot of money spent by both the contractor and the Government in ensuring that the submitted plan meets the contract requirements, not to mention the cost of enforcing it (direct costs in the purchase of PPE, and indirect in the fact that it makes things take longer). This is leaving aside that there might be significant costs from “safety” requirements that may not actually improve safety but are expensive to comply with nonetheless.
From a wider view, these huge projects were often done with the Facebook mentality of “move fast and break things.” My go-to example is The Dalles Dam. The largest group impacted were the Native Americans, who protested that they had Celilo Falls as an important site for catching salmon, one of the Four Foods, and the nearby Celilo Village, which was the oldest inhabited site in North America and an important meeting site for trade and culture.
They got told to take a lot of pictures, because all of that is going to be underwater.
Now, I actually agree with the construction of the dam, as the improvements to power production and navigation were critical. But you can’t escape that a lot of the infrastructure we have was built without much consideration of these types of issues. People wonder why we can’t build high-speed rail in the Northeast Corridor when we had an extensive rail system built there 70 years ago and new highways built 50 years ago? Because those things got built by drawing straight lines between population centers, and telling everybody who lived on that line “fuck you.” For better or worse, that’s not possible anymore. So now we spend years beating decisions to death, and people get paid a lot of money to argue back and forth while doing it.
This is definitely a good explanation of why China can build infrastructure cheaply and efficiently, but the US can’t. But I don’t think it does a good job of explaining why Japan and France (where labor and environmental regulations are much more comparable to the US, if not more stringent) can do it so much more cheaply.
William Mulholland built the Hollywood Reservoir under the famous Hollywood sign in the early 1920s in about 18 months. In contrast, when the pedestrian walkway around the lake was damaged by the 2005 rains, it took until 2013 to get it fixed.
Here are pictures:
http://isteve.blogspot.com/2014/03/why-are-infrastructure-projects-so-slow.html
On the other hand, Mulholland’s next dam, the St. Francis, collapsed in 1928, killing 600 people. So in the early 1930s, a huge pile of dirt was laboriously pushed in front of the Hollywood dam to keep Hollywood from being washed away.
So maybe some times we have good reasons for doing things more slowly now?
Oh, I’m not advocating going back to building dams with the philosophy of “moving fast and breaking things.” When I said that it’d be appalling to kill somebody every two weeks on a construction project, I was including my own feelings in that.
As you pointed out, this is only a small part of the explanation. If there were one obvious cause, I think it’d jump out at you when you looked at the stats. As it is, I think it’s a combination of small factors adding up (as others have stated elsewhere in the thread). Some other small items that might be adding up: we’ve got a much better understanding of structural behavior, which requires more analysis. For example, we know a lot more about how earthquakes affect structures, and the design requires a lot more effort now; it’s just really complex to model seismic loading, and for large structures it costs quite a bit of money. For small structures you just use a simple method that will overdesign, but the big projects are worth a more extensive effort that adds a few percent to your design cost. Then, construction is more expensive because of the more complex detailing of, e.g., connections. To touch on that a little bit, the design of connections is much better now – for example, old structures will have welds from three directions intersecting, because they didn’t realize how bad an idea that is – but many of these improvements require more extensive design, fabrication, and construction effort. For example, avoiding intersecting welds requires weld access holes, which are fiddly to construct properly, but you also don’t have welds failing without warning, especially in fatigue conditions.
This is in addition to the featherbedding, permitting, etc., noted elsewhere, but it all adds up.
Any thoughts on the huge Oroville Dam north of Sacramento in Gold Rush country that will likely need hundreds of millions of dollars in repairs after the concrete spillway pretty much exploded under heavy (but not overwhelming) outflow?
I don’t know if there are any general lessons to be learned. But the body language of the public works engineers suggests some guilt and fear on their part. California was pretty broke a half decade ago, but could have afforded to fix the spillway recently if flaws in it had been discovered, but the staffers didn’t seem to inspect it terribly closely.
I wonder if the quality of public works engineers has declined as we’ve moved from the construction to the maintenance era. Fifty to 100 years ago, building dams was a highly prestigious profession. But few dams have been built in this century and mostly we just want the ones that we already have not to collapse. That’s not particularly attractive to top people looking for a career.
$100 million now seems cheap to replace an entire spillway. I think living in the NYC area (where $1M is the price of a parking spot) has completely desensitized me to these kinds of costs; if it’s less than $1B I figure they got off easy.
I’m going to preface this with I have no inside knowledge about Oroville or access to the designs. The only thing I’m basing this off of is public news stories, which could easily be missing important information, and discussions with a more experienced structural engineer.
My best guess was a problem with the foundation drains. Them being stopped completely could cause issues, but the most likely here is that the filter on the drains blew out, allowing piping to occur. You need to drain the foundation under dams and appurtenant structures, because the water pressure due to the forebay will exert a large uplift. By using drains, you reduce the pressure at the drains to atmosphere, but water is necessarily flowing out.
Water flowing through soil can be modeled as flowing through little pipes. Just like in actual pipes, given a specific pressure, smaller pore sizes will result in slower flows, and larger pores will result in faster flows. (Longer pipes of the same size will flow slower than shorter ones.) A soil particle of a given size will be displaced above a certain critical flow; small particles take only slow flows, larger particles faster ones. So what can happen is that water flows through the soil fast enough to displace small grains. Now that the small grains are gone, the pores are a little larger, so water flows faster. The faster water can then displace larger grains, which increases the pore size again, until it opens a large hole which flows fast enough to wash away all the soil. This mode of failure is called piping. It’s worth noting that this can happen fast. The Teton Dam failure in Idaho had the first clear indication of piping – muddy water flowing at the right groin of the dam – noticed at about 07:30; the dam failed at 11:45, killing 11 people. There were some clear flows coming out of the soil below the dam a few days prior to the failure; seeps like that aren’t all that uncommon at a dam and need to be monitored, but they aren’t usually a big concern without other evidence (such as cloudy water indicating soil mobilization).
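The threshold at which grains start to move has a standard back-of-envelope form: Terzaghi’s critical hydraulic gradient for heave/piping. Here is a minimal sketch with assumed, typical-sand parameters – a simplification, since real piping analysis is progressive rather than a single threshold:

```python
# Terzaghi critical hydraulic gradient: i_c = (G_s - 1) / (1 + e),
# where G_s is the specific gravity of the soil grains and e is the
# void ratio. Seepage gradients approaching i_c can start mobilizing
# grains. The parameter values below are assumed typical for sand.
def critical_gradient(G_s, e):
    return (G_s - 1) / (1 + e)

i_c = critical_gradient(G_s=2.65, e=0.6)
print(round(i_c, 2))  # ~1.03: head loss per unit seepage length at incipient heave
```

Once fines wash out, the void ratio rises, pores enlarge, and the flow speeds up, so the gradient needed to keep eroding drops – which is why piping can run away in hours.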
So, to go back to Oroville, you need to have drains, but water flowing in the drains can cause the soil to pipe and wash away. So you use a filter material around the drains to prevent the small grains from being washed out. If this filter fails, the soil gets washed out through the drain and opens a void, eventually the void is big enough that part of the slab above collapses.
One of the big issues in designing a spillway is the dissipation of energy–the energy of the flowing water is extremely violent and if you don’t control what happens to it, it’ll eat everything. If I’m right about the drains allowing a slab to partially collapse, this then created a huge irregularity in the flow of the spillway, and churning water then started dissipating energy right there, which eroded the concrete and the soil underneath it. (Once the hole started opening up, there’s absolutely no way to stop it from getting bigger short of simply to stop using the spillway; anything to block or redirect the flow would just make the problem worse.)
An alternate possibility, that I like less after sleeping on it, is that the foundation drains were blocked, causing the slab to heave upwards a bit and creating a flow irregularity that started erosion.
There are other possibilities, but I think these are the most likely. I haven’t had an opportunity to talk to a hydraulic engineer to shoot the breeze about what they think, so they could also come up with something from their side.
As far as the state engineers seeming embarrassed, that’s easily explainable by the fact that they have a gaping hole in their spillway. Even if they didn’t actually do anything wrong, until they know for sure what happened they’ve got to be asking themselves, “Did I miss something?”
I’ll admit, that I could see myself missing the problem here. You don’t have enough time or money to inspect everything, so you focus on where you think problems will occur. I’d have expected problems to happen near the gates, where you have large loads on the piers from the gates and where water is first flowing in and will be churning a lot, or near the end of the spillway where the flip buckets would be susceptible to a lot of erosion. I’d have only given the chute, where water is mostly flowing in a sheet, the once-over to see if anything jumped out at me, but probably wouldn’t have spent a lot of time on it.
Edit: I forgot to engage your last musing, about the quality of engineers in government service declining on average. I think you’re probably right. Not only because of maintenance being less sexy, but also the bureaucratic hassle of modern construction. During the era of major dam construction, those guys were limited by their budget, their imagination, and the laws of physics; they could go “Look on my works, ye mighty, and despair!” Now you spend more of your time shepherding everything through interminable processes. Instead of civil engineering, maybe they go work for a tech company.
Now they’re ordering an evacuation because they are in danger of losing the pool. I’ve not watched any video, only read news stories, but to take your word for it that they displayed body language showing “guilt or fear”: if they had an inkling that headcutting or undermining of the auxiliary spillway was possible and didn’t tell the public that days ago, that’s appalling and people should be fired. They also should have been releasing water through the main spillway at full tilt in previous days, and accepted the loss of the chute, since there was no danger to the main spillway monoliths.
Thank you, CatCube.
Steve
A water main built by William Mulholland (and fictionalized in the movie “Chinatown”) in 1915 that runs near my house in the San Fernando Valley is being augmented by a parallel water main a half mile away. The original was built by men and mules in about 18 months. The replacement water main project has been underway for approaching ten years now.
The street near my house has been used to park enormous earthmoving equipment for several years now. The Department of Water and Power evidently doesn’t want to pay over time for workers, so the millions of dollars of machinery are only used a maximum of 40 out of the 168 hours in each week.
Could this be part of Baumol’s Cost Disease: as we get richer, perhaps we use capital equipment such as steamshovels less intensively?
I imagine the first ever steamshovel was probably used close to around the clock. But now we have a lot of them sitting around, so they sit around unused a lot of each week.
Intuitively, using the steam shovel 120 hours/week instead of 40 means that you get three times as much work done each week but have to replace three times as many worn-out steam shovels every year at the steady-state equilibrium. So the cost savings would be limited to the time-value-of-money gain at startup (or incremental expansion), when you get some number of years of “free” use out of the lot of brand-new steam shovels you just bought and of which none will wear out for X years even used around the clock.
Not sure what the value of X is for steam shovels. For airliners it seems to be about 20 years, and the airline industry does use airliners pretty much around the clock (~50% actual flying time, plus turnaround and maintenance), so there’s obviously gain to be had. I’m skeptical that uncaptured duty-cycle gain is going to be a big part of the cost disease.
Other factors to consider: labor is pretty much the overriding factor in construction these days, where in past years it was materials/capital. Aircraft have a much higher unit cost/operator salary ratio than a HYEX. Spending the money on salaries to keep a piece of construction equipment running around the clock may not pay in the same way that keeping a several-hundred-million dollar aircraft moving would.
The logistics of keeping them running are also different. Two or three shifts on a construction project get expensive quickly. Night shifts are less safe, due to less light and workers operating outside of their circadian rhythm. You’ve also got to have owner’s quality assurance reps on site for three shifts, or the dumb nonsense they’re looking out for will just happen at night. Pulling three shifts (or even one shift at night) doesn’t usually pencil out except in exceptional circumstances: most owners won’t pay 1.5× the cost to have the job completed in 1/3 the time. Since the job won’t be operating for 24 hours, you’d have to shift the equipment to other jobs to keep it fully utilized, and the costs of moving it would eat the advantages pretty quickly.
For some capital goods, though, the physical depreciation of the machine is a strict function of time rather than intensity of use…especially if they are letting their machines sit outside in the rain and cold all the time. Oh wait, you’re talking about California. Nevermind.
I assume I’m missing something here because you seem smarter than me, but:
If the value of the labor of a steam shovel is substantially greater than the cost of the steam shovel, doesn’t that make it worth it to run it 24/7? If my steam shovel costs $1 but it can finish $1M projects in a day as opposed to a year… wouldn’t you work that thing to death and just eat the $1 every time it wears out?
Is it that the actual numbers involved are not that extreme, or is the some other factor I’m not considering?
If it’s only $1, wouldn’t I buy one steam shovel per work site and as soon as the workers say “now we’re off-shift and you have to pay me $30/hr instead of 20 to run the steam shovel”, send them home and leave the steam shovel idle?
$30/hour is still really low compared to getting $1M twice as fast. Is that what you meant by time-value of money? My understanding of that is that when comparing two equal paydays, the earlier one is better. But this isn’t just about getting a payday earlier, you’re getting more total paydays because you can do more projects each year.
One can also get that million dollars twice as fast by buying two steam shovels and running them both single-shift on $20/hour labor. Two steam shovels cost twice as much as one, but they wear out half as fast so that’s a wash in the long run – number of steam shovels worn out per construction project is essentially a constant.
“Time value of money” comes from the fact that you pay for steam shovels up front and the savings for not having to replace them as often don’t materialize until later. That’s not a trivial matter. But compared to paying your work force (and it will be more than just one shovel operator) 50% extra to work night shift?
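The two plans being compared can be sketched in a few lines – every number here is an assumption for illustration: running one shovel around the clock on premium night labor versus running three shovels on day shift only. Shovel wear per job comes out the same either way, so the labor premium is what separates them:

```python
# Toy comparison with assumed numbers. Wear is proportional to hours
# of use, so shovels "consumed" per job is a constant regardless of
# how many machines share the work.
SHOVEL_PRICE = 300_000         # $ per shovel (assumed)
SHOVEL_LIFE_HOURS = 30_000     # hours until worn out (assumed)
JOB_HOURS = 6_000              # shovel-hours the job requires (assumed)
DAY_WAGE, NIGHT_WAGE = 20, 30  # $/hour, per the thread's figures

wear_cost = JOB_HOURS / SHOVEL_LIFE_HOURS * SHOVEL_PRICE  # same for both plans

# Plan A: one shovel, 24 hours/day -> 1/3 of hours on day shift, 2/3 at night rates
day_hours, night_hours = JOB_HOURS // 3, JOB_HOURS * 2 // 3
plan_a = wear_cost + day_hours * DAY_WAGE + night_hours * NIGHT_WAGE
# Plan B: three shovels, day shift only -> every hour at the day wage
plan_b = wear_cost + JOB_HOURS * DAY_WAGE

print(plan_a - plan_b)  # prints 40000.0: the gap is exactly the night-shift premium
```

Plan B still isn’t free: the capital for the extra shovels is tied up earlier, which is the time-value-of-money cost described above – but that’s small next to a 50% wage premium on two-thirds of the hours.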
Whoa.
I’m perplexed that people are perplexed?
1. Massive wealth transfers to the elderly.
2. Demographics.
3. Expensive but unmeasured quality of life increases such as household size.
4. Keeping high trust machinery running in a low trust post-civilization can be arbitrarily expensive.
5. Related to 4 and exacerbating 2: decreasing number of people working in wealth producing jobs.
6. Mandated quality increases, housing and car safety, etc.
7. It is illegal to keep wealth destroying people away from fragile things. Babysitting while pretending it isn’t, related to 5, can be arbitrarily expensive depending on how much you want to dress it up.
Someone might respond to all of this that per worker productivity is increasing, but my feeling is that if you teleported an economist from the past to today and had them look at how we are calculating per worker productivity they’d scream and run.
“But it is hard to credit school spending for the minority students’ improvement, which occurred almost entirely during the period from 1975-1985.”
The National Diffusion Network began in 1974. The increase might be attributable to getting teachers to use evidence-based approaches to education more frequently.
Your comments on increased medical bureaucracy and record-keeping are mirrored in my experience. I work in aftermarket fleet support at a major airline manufacturer. The project I’ve spent the past three and a half months on has been the result of a single finding on one airplane (out of two that were inspected for this problem). The total cost of the engineering work on it so far has probably been at least $200,000 in salaries alone. I expect we’ll see another $100,000 or so spent by us (the manufacturer) before this is all over. That doesn’t count the cost to the operators, who are going to have a new set of inspections to do every couple of years. All before we have any clue if this is a widespread problem or the result of a few hard landings by the particular plane (and there is some reason to suspect that might be the case).
Looking over documents we produced 20+ years ago, they’re models of simplicity compared to what we do now. Half as many pages, sometimes less. Some of this may be computers letting us do more stuff than we used to be able to, some of it is that more airplanes are ending up overseas and in the hands of operators who have to be nagged to take care of them. But a lot of it is increasing paranoia about safety on our end. And I still can’t figure out why we went from ‘left side shown, right side similar’ to separate figures for each side.
Just to be clear, we take safety very seriously, and flying is very safe. But as I routinely point out, if all we cared about was not crashing, we’d ground all of our airplanes immediately. This is very much a minority opinion, and nobody has yet pointed out that we might actually be costing net lives that way. Tradeoffs are not in most people’s thinking.
The same mentality exists in the nuclear world. The goal of the NRC and all the major players seems to be to kill (-2) people per year. As one would expect given its impossibility, this goal leads to major increases in cost.
Control systems. If something is spiraling out of control despite efforts to curb it, likely it’s because something else is under very strong control and causing the spiral. Look toward what is *not* changing to explain what is. 40 hour work week? The quantity we base inflation numbers on? Zero sum status? Start from surprisingly unchanged quantities.
You can consider me one of those people. I had only identified the education cost disease, and had attributed it to the government loan program. I figured that students would pay whatever they could to go to the college they wanted, and discounted how expensive the loans are, but the picture looks more complicated. I’m not in favor of free education at the current rate, but I would consider it at the old rates.
For me, this generalizes. I am against government on moral grounds, but it would be really difficult for me to muster up the motivation to argue against a government with low taxes, excellent services, and minimal laws/regulations.
My theory is that there has been scope creep in these industries that is justified by a kind of moral blackmail invoking their core mission.
Tell people they should pay more so that a university can invite Milo and resemble a resort, and people won’t do it. Tell people they need to support “education” and they’re all in. If you’re a person who wanted to run a resort, what’s a better way to ensure a steady stream of income? Tie your fortunes to the vagaries of the market? Or to people’s commitment to spend money on education? Why not attach yourself to an educational institution and assert that what you do is necessary to prevent the next generation from falling behind?
I think this scope creep is also partly explained by the trend over the last several decades toward double-income households. Maybe people wouldn’t pay $5000 for the delta in education test scores from 1965 to today. But maybe they would pay it for the freedom to pursue a career, earn their own paycheck, and leave a marriage without plunging their family into financial peril. So the schools end up doing things that stay-at-home parents used to do.
—
Reading the anecdotes about private rooms in hospitals, it strikes me that this is the opposite of the trend we’re seeing in airline travel. There, the complaint is that the “base price” pays for less and less, and you need to pay extra to check a bag, carry a bag, have a reasonable amount of leg room, etc.
Analysis blames the customers — we’re not willing to pay for extras, and airlines have responded.
Maybe the difference is price transparency — we see the numbers, and feel like paying extra for more comfort is an indulgence.
I’ve thought about these issues. I agree that in some sectors (particularly education and transit), this is a real issue.
But overall I don’t think this theory holds up. I’m going to use some basic identities to Feynman this theory.
Stylized fact 1: Income has remained flat after adjusting for chained CPI.
Stylized fact 2: Costs have gone way up after adjusting for chained CPI.
Conclusion: Consumption must have gone down, since consumption = Income/Cost.
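The identity can be sketched with made-up numbers (purely illustrative, not real data):

```python
# Purely illustrative numbers, not real data: if real income is flat
# while real unit costs rise, the identity forces quantity consumed down.
income_1975, income_2015 = 100.0, 100.0   # "income has remained flat"
cost_1975, cost_2015 = 1.0, 2.5           # "costs have gone way up"

consumption_1975 = income_1975 / cost_1975   # 100.0 units
consumption_2015 = income_2015 / cost_2015   # 40.0 units
print(consumption_1975, consumption_2015)    # 100.0 40.0
```

On these numbers, the quantity consumed in 2015 would have to be 60% lower than in 1975.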
The problem is that this simply hasn’t happened. People live in bigger homes. They visit the doctor more, and have all sorts of medical procedures that didn’t exist in the past. More people are attending college and receiving degrees than ever before. People just have more of everything.
If you want to argue with this point, I challenge you: find a major category of consumption that has gone down in material terms over a long period of time. The most I can think of is obsolete goods like landlines, or inferior goods like home-cooked meals (relative to restaurant meals).
That’s exactly the opposite of what this theory predicts! Therefore, although I can’t identify exactly where things go wrong, they must go wrong somewhere.
I will suggest a variety of possible explanations. First, chained CPI is not the same thing as long term inflation because the basket of goods changes over time.
I.e., year 0: food costs $1, medicine costs $1.
Year 1: Food still costs $1, medicine still costs $1 – 0% inflation. Also “super medicine” is invented (costing $1) and is added to the basket of goods.
Year 2: Food costs $1, medicine costs $1, super medicine costs $1.60. Inflation over year 2 is 20% (a $3.60 basket versus $3.00).
So chained CPI says prices rose 20% from year 0 to year 2 – about 10% a year. But in reality, every good which existed in year 0 costs exactly what it did then. This is exactly what has happened in medicine.
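One way the chaining arithmetic can work out (toy numbers, slightly regularized from the example above; the index logic here is a simplification of how chaining actually works):

```python
# A toy chained price index. The basket is re-defined each year to
# include whatever goods exist at the time, and year-over-year price
# relatives are multiplied ("chained") into a cumulative index.
prices = [
    {"food": 1.00, "medicine": 1.00},                      # year 0
    {"food": 1.00, "medicine": 1.00, "super_med": 1.00},   # year 1
    {"food": 1.00, "medicine": 1.00, "super_med": 1.60},   # year 2
]

index = 1.0
for prev, curr in zip(prices, prices[1:]):
    shared = set(prev) & set(curr)  # goods priced in both adjacent years
    index *= sum(curr[g] for g in shared) / sum(prev[g] for g in shared)

print(round((index - 1) * 100))  # 20 -> 20% measured "inflation",
                                 # yet every year-0 good still costs $1
```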
A second explanation is that most of the major categories of goods which experience large cost increases are not hedonically adjusted. Medicine is a great example. I spent 1,40,000 rupees on spine surgery, but I’d have been happy to pay $140,000 for it. In 1970 I’d have spent $100 on painkillers and been told I’d stay in bed for the rest of my life. This is NOT inflation – I’m spending a lot more money for a much better thing. But CPI treats this as 140,000% inflation.
In short, I can’t identify where this post goes wrong. But simple arithmetic identities do demonstrate that it does go wrong somewhere.
Income has gone up. It’s just median income that hasn’t.
Also there have been large decreases in the relative costs of many other types of goods (like clothing, manufactured goods, etc). Basically anything made in a sweatshop overseas or by robots in a factory.
If you’re right, you should be able to come up with some goods or services that the median consumer has less of. What are those goods or services?
Would you be willing to trade in the 2005 basket of goods consumed by the median person (or the 20th percentile person, or whatever) in return for the 1975 basket? I sure wouldn’t. Among other things the 2005 basket includes a medical treatment that allows me to walk…
This may be too far down the thread and too long after the post to matter, but …
There are a few things which come to mind which would need to be taken into account in an analysis of your sort. 1) In PK-12 education most people have their kids in public school so that cost doesn’t come from their income. 2) In health care most of most people’s bills are paid by employer provided health insurance which also mostly doesn’t come from their income; at least the way we measure it. 3) Supposedly there is an increase lately in adult children living with their parents. This may be evidence of a reduction in consumption of housing.
Is tipping culture a similar phenomenon? In the US (particularly in big cities), standard tipping percentages have ratcheted up steadily over time. Everyone wants to be a good tipper, so the new standard becomes their minimum, and so on. How much does that happen in other countries with tipping?
It’s important to distinguish between lifespan and life expectancy. Too often people interpret a life expectancy of, say, 42 in some developing country as meaning most people die before they reach the age of 50. More often, it means lots of babies dying, and people otherwise living to old age. As an average, life expectancy is significantly impacted by anything that reduces infant/child mortality. When large numbers of infants and children are dying right after birth, even small improvements in neonatal care can dramatically improve life expectancy. (Which explains dramatic gains in life expectancy as nations rising out of poverty devote more resources to neonatal care.) However this does not necessarily increase lifespan. In developed nations with low infant and child mortality, it’s unreasonable to expect the kind of increases in life expectancy we saw when health improvements were attacking the fat left tail of the distribution. That right tail of life expectancy – old age – remains as elusive as it was in the 1930’s, it seems.
Additionally, the US calculates child mortality differently from other nations (last I checked, we’re more generous in reporting neonatal fatalities as live births than most countries); this could be part of why the US lags behind the rest of the world in life expectancy but does much better on many measures of health CARE outcomes, such as 10-year cancer survival rates (better than most – if not all – countries). Of course, multiple causes are probably at work here, including behavioral issues such as our increased rates of obesity, smoking, and murder/suicide than other countries. Because of the large number of significant confounders – and the nature of the statistic itself – I’m wary of relying heavily on life expectancy to project quality of health care.
But health care might help us understand the underlying question of why costs are increasing in some sectors. For example, rates of asthma, allergy, inflammatory bowel disease, TIIDM, and a host of other diseases are much higher (especially in the US) than they were in the 1960’s. To reverse Scott’s thought experiment, how would the health care systems of the 1960’s handle their populations if they had to treat today’s rates of asthma, obesity, cancer, depression, etc? How much of the increase in health costs is driven by increases in chronic disease?
And on a related note, perhaps when people can buy a new cell phone for $100, they choose to spend their extra money, not on more health care directly, but on unhealthy behaviors, knowing (hoping?) their doctor will be able to treat the resulting sequelae. In other words, could technology improvements be driving changes in behavior, which drives increased costs, as a revealed preference for sloth/irresponsible behaviors that we’re “buying” with more health care spending?
Not sure how this might relate to the other cost diseases discussed in this post. My field is health care and biomedical research.
Summary:
1) Lots of replies trying to explain the phenomenon
2) Occasional replies trying to rebut the claim that the phenomenon exists.
And 2) gets lost in the noise, even though with 2) there is no need for 1).
There is a strange analogue for this problem in CS, and it goes as follows:
Let’s say we have a slow process somewhere, someone profiles it and finds that the process spends 20% of its time on task A, and 80% on task B.
The person then proceeds to optimize task B, and is able to achieve a 4x speedup.
The process is profiled again, and now we find that it spends 50% of its time on task A and the other 50% on task B.
At first sight some people find this surprising, in the sense that they would expect to see a 4x reduction in task B, and no difference in task A (even though that would not add up to 100%).
A good way to visualize what is happening is to think about what the process can do in 10 ‘time units’.
Before the optimization it was doing:
ABBBB ABBBB (20% Task A; 80% Task B)
After the optimization it is doing:
AB AB AB AB AB (50% Task A; 50% Task B)
So before, the process only did 2 “cycles of work” in 10 time units; now it does 5 “cycles of work”.
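The shares can be checked in a few lines (a minimal sketch of the arithmetic):

```python
# Each "cycle of work" initially costs 1 time unit of task A and 4 of task B.
a, b = 1.0, 4.0
share_before = (a / (a + b), b / (a + b))    # (0.2, 0.8) -> 20% A, 80% B

b /= 4                                       # 4x speedup of task B only
share_after = (a / (a + b), b / (a + b))     # (0.5, 0.5) -> 50% A, 50% B

cycles_before = 10 / 5.0     # 2.0 cycles of work per 10 time units
cycles_after = 10 / (a + b)  # 5.0 cycles of work per 10 time units
```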
Moving to a cartoonish version of the real world. In 1970 we spent 20% of our economic resources on education and 80% on manufacturing.
In the meanwhile we improved manufacturing efficiency by 4x.
We go and look at how we spend our resources, and to our surprise we find that we now spend 50% on Education and 50% on Manufacturing.
So strangely enough it seems that the economy is behaving a bit like a CPU.
edit: did not reply to an appropriate post, I meant to reply to the post comparing spending to CPU use in a program.
—
The issue with this metaphor is 100% CPU-utilization of a computer program is sort of a rough analogue of a “zero saving” society — money _has_ to go somewhere, the only question is where. But that’s not how people treat money.
Why are efficiency gains eaten by other sectors? There is probably some sort of law operating. What is that law?
(I wrote the CPU comment, found some errors, deleted it, fixed the errors, and re-posted it (it is now below this comment). It was all so quick I never thought anyone would find the time to read it, much less reply to it.)
A substantial amount of ‘savings’ are really just ‘lending for others to spend’ (though the causal chain is a bit complex), so money usually does not exit the system. And even though people can save money, they cannot save their own time. So it is a bit like CPU time, and the unemployed are akin to an idle CPU.
As for your (very good) question:
It seems that we used to spend all these resources on some other tasks (let’s say manufacturing), and when these resources were no longer needed in manufacturing, we just moved them to other areas, let’s say education, where they could be used, even though they added very little marginal value.
So efficiency gains in manufacturing translate into productivity losses in education (as new entries add little marginal value).
In CPU land, task A gets more CPU time, but it really has not much use for it, so it does little more than heat the CPU. Perhaps one could argue that it would be better to leave the CPU idle, then. Or find new useful tasks to do.
When we do these calculations comparing cost to GDP, is there any way to figure out costs in comparison with leveraged GDP — i.e. GDP plus whatever we’ve taken out in loans, without fully subtracting the future debt obligations? Can our costs be explained simply by imagining that people are willing to pay some fixed percentage of their money on hand for coverage, and then realizing that in the US, loans put more money into people’s hands than in other countries?
Maybe the important variable here is just availability of cash: people with lots on hand are willing to spend more than they need for no good reason, and those around them follow along, both because prices rise and because the culture prevents them from opting out. There isn’t much of a counteracting force, because readily available loans and credit put money in everybody’s pocket – money which gets spent recklessly because people aren’t good at handling debt. Everyone sees the spending, nobody sees (or talks about) the loan, and so there’s no social mechanism to encourage thrift.
Another question: is there a way to break down healthcare spending by percentile, and then compare that across countries? Is it very wealthy Americans, very poor Americans, middle-classed Americans, or some combination of these groups which pay more than you’d expect for similarly-wealthy citizens in another developed country?
An anecdote which might say nothing, but which informs the above thoughts: When I got my first big-shot job out of college, I rented a single-room occupancy in the cheapest part of town. I cannot tell you the flak I got for this decision. My parents were terrified I would be the victim of crime. My girlfriend was afraid to visit my place.
I spent a third to a fourth of what many of my class-mates did, and yet, in comparison with the people around me, I spent lavishly. The building I lived in was occupied mainly by Asian immigrants, most of whom spoke little English. One of them was a little old Chinese lady named Lisa who’d lived in the same rent-stabilized apartment for decades. Every time she’d see me with a cut of meat, or vegetables, or a new appliance, she’d ask, “How much?” and no matter what number I gave her, she’d shout “OHHHHH!” and shrink as if burned. It wasn’t an act — she legitimately could not believe someone would pay $7-10 for a loaf of bread. She shopped conscientiously, at farmers markets and Chinese grocery stores, and did not buy luxuries she did not need.
And why was I spending $7-10 on a loaf of bread? Because I had money for the first time, the store was more attractive than my neighborhood bodegas, the bread tasted better, and it looked more attractive. But the most important reason was: what did I care? Money was cheap to me. Spending wasn’t taboo. Why did Lisa find that price obscene? Because money was expensive to her, and a purchase was a serious issue. If her store charged her $7 for bread, she wouldn’t buy it, no matter how much money she had.
My demographic’s willingness to spend more than we strictly needed raised the prices of bars, coffee shops, grocery stores in our city — almost every service, I imagine, was affected. The increases in costs were passed on to the residents of our city, many of whom were neither as wealthy as my demographic, nor as willing to hunt for bargains as people like Lisa. My demographic upped the supply of high-priced goods at the expense of low-priced goods, and, perhaps as importantly, shifted our social circles’ cultures (which included many non-CS college grads) to be more expectant of luxury spending. All the programmers could afford expensive apartments and expensive goods, and it seems to me that all the non-programmers just paid out the nose for those things anyways.
One problem in this discussion is that everyone is using dollar prices. Consider using gold prices instead. To find the gold-prices of things, take their dollar prices and divide those prices by the price of gold in dollars/oz. What would you find in that case?
1. For starters, it is worth noting that in J. M. Keynes’s day, the price of gold was $20/oz. Now it is $1300/oz., an increase of 65 times. This will affect our gold-price calculations.
2. The gold-prices of things like electronics, food, automobiles, etc. have declined drastically since World War II. This reflects the productivity increases in these sectors in comparison to the gold mining sector.
3. The gold-prices of things like housing, healthcare, and education have stayed about the same. The productivity increases in these sectors have been commensurable with the productivity increases in the gold mining sector.
4. The gold-prices of wage labor have declined drastically. Workers earn drastically less gold in each week’s paycheck. The reason people do not feel poorer is, the gold-prices of most (but not all) of their necessities have likewise declined. It costs less in gold terms to support a worker’s subsistence…if you disregard things like college, healthcare, etc.
I think the dominant narrative frames the story in the wrong way. It is not that the cost of healthcare, education, etc. have increased, but that the costs in these sectors have failed to decline (when measured in gold terms), which, in conjunction with decreasing wages (in gold terms) leads to people being poorer in these areas, even while in other areas where price declines have kept pace with wage declines, people do not feel poorer. The reason we do not perceive things this way is, the depreciation of the dollar against gold has obscured this.
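Mechanically, the conversion described above is just a change of deflator. A sketch with hypothetical dollar prices (only the $20/oz and $1300/oz gold prices come from the comment; the car prices are made up for illustration):

```python
GOLD_THEN, GOLD_NOW = 20.0, 1300.0   # dollars per ounce of gold

def gold_price(dollar_price: float, gold_per_oz: float) -> float:
    """Express a dollar price in ounces of gold."""
    return dollar_price / gold_per_oz

# Hypothetical automobile prices, chosen only to illustrate the pattern:
car_then = gold_price(2_000, GOLD_THEN)   # 100.0 oz of gold
car_now = gold_price(30_000, GOLD_NOW)    # ~23.1 oz of gold
# The dollar price rose 15x, but the gold-price fell by more than 4x –
# the "declined drastically" pattern described for manufactured goods.
```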
I don’t know about using “gold” as some sort of standard measure. The reason gold was $20/oz in the 30s is that it was dictated to be that price. The definition of a dollar was 1/20th of an ounce of gold. Once that link is broken, talking about how much “gold” somebody earns today is pretty much meaningless, since the 1/20th of an ounce of gold as a dollar was pretty meaningless before; it was set by fiat, not some rational process.
Under the old system when the US Treasury and the Bank of England pegged the price of gold at $20 the price was probably above the market clearing price. You can tell this because government gold holding kept increasing. That implied there was not sufficient private demand at $20 to clear the market, leaving a surplus for the government to add to its stocks.
Short list of other possible reasons – short concepts, not fully fleshed-out ideas.
One: There is the economic equivalent of parasites/parasitical behavior, which has suddenly grown massively, probably due to the massive increase in available food for it. In which case the equivalent of deworming a cat is required.
Two: The modern economy is relatively new – by the time scales of most civilizations it isn’t even in puberty yet (industrial civilization from the 18th century isn’t that old either) – so there may be completely unexpected problems that we don’t even recognize. We might not have the right metrics, theories, and examples of other similar economies to really understand what’s happening.
E.g., we know Communism is a failed model for running a modern industrial economy, but that doesn’t mean the same trap won’t hit modern capitalism (civilizational teething).
Three: There’s a hidden cost or rising burden on the economy that is manifesting as an increasing cost in these sectors for whatever reason. Think of Limits to Growth’s idea of the rising costs of pollution eventually overcoming any growth, in at least one model (I’d need to reread it for a proper explanation).
Does pollution need to be a physical thing like CO2 or arsenic for this model to make sense?
Also for this idea, pollution could be something we think is positive but does burden the economy.
Four: The rapid change in technology for some areas is increasing inefficiency and decreasing the ability to become more efficient over time, but this is an overall trend that is hidden by its timescale (decades of tiny changes).
Think of a column by Shamus Young (The Escapist) where he talked about video game artists not being able to truly learn their tools because the tools change too quickly, and wondered whether this would impact processes that require paperwork: changing document management systems would initially decrease effectiveness, and if they change too quickly, the increase in efficiency/productivity won’t make up for the initial drop.
Five: A really weird mixture of all the possibilities, since none is obvious.
Follow-up – the cost disease is also a possible explanation for how you get articles about Americans doing fine and good economic indicators, yet Trump won the presidency on a campaign that is all about the opposite.
Due to these massive rises in the cost of the things that matter deeply to people & the chance they could begin crowding out other expenses, all those good economic metrics are showing a very weak countertrend that is not overcoming the cost rises.
i.e. The costs & downsides are growing faster than the profits & benefits.
It may be as simple as: look for areas where demand is subsidized and supply is constrained.
In my opinion, a large role of government is to support competition… always and everywhere. The goal of business is always to create competitive barriers. When that occurs via productive ways (a better product at a lower price) everyone benefits. When that occurs many other ways (regulatory barriers, monopoly power, bribes) there is no such gain. There is a role for government to play in helping to tear down these non-productive barriers as a means of keeping markets properly functioning (i.e., productive). Today’s government seems more interested in subsidizing demand where non-market forces have pushed prices up… which is of course a recipe for further inflation and further non-productive barriers (since the rewards to such go up).
Three completely unrelated comments:
1) Did the graphs on the ‘salary’ costs for Public School teachers, and Nurses and Professors include the cost of their benefits? If it doesn’t then the numbers are very misleading — I have no doubt that total benefits have been rising much faster than baseline salaries (partly of course because a very considerable part of the ‘benefits’ are insurance for the 10x rising health care costs). As you mentioned public pensions have dramatically increased as well and those costs should also be included in the ‘total compensation’ if you want anything like an accurate picture of how real compensation levels for teacher/nurses/doctors/professors have changed (or not) over time.
2) FYI, the Khan Academy is more a replacement for K-12 education than a replacement for college. I’m sure there are great places to get on-line degrees, but that’s not what the Khan Academy is doing. Their interactive math problem sets start right down in 2nd grade type things: [here are 3 ducks (shows pictures of 3 ducks paddling around) and here are 2 more ducks (ditto) how many ducks are there altogether…] One of the most fun things I have done in recent years was to get myself an account there (it’s free!) and work my way through the entire series starting about in the 4th grade (geometry!) and up through calculus. They haven’t quite gotten it to the point where it is as fun as playing a video game, but it is close. A big part of it is immediate feedback. You figure out your answer, and type it in and bang! right away you get feedback, right or wrong, plus suggestions and help links to figure out what you did wrong. I don’t think the Khan Academy is a superior education to a public school for things like English or History, at least not yet. But their math problem sets beat conventional education flat. The fact that it is also free is just a side bennie. As soon as they implement comparable quality interactive learning/computer problem sets for History and Chemistry and English, anyone will be able to get a better K-12 education on-line than in any public school in the world.
3) If you’d like to know more about what publicly funded hospitals in India are like, I would strongly recommend “Better,” a book by Atul Gawande. One of the chapters describes a month he spent volunteering in public hospitals in India, and all the things he saw when he was there. It was amazingly interesting. The degree of Soviet-style inefficiency on display was simply staggering (brand new MRI machines but no basic supplies, not even sterile bandages; the families of patients admitted to the hospital had to go out and buy their medical supplies in little hole-in-the-wall stores clustered around each hospital; they even have to go out and buy the surgical supplies needed before the surgeon can operate). It was simply mind-boggling. Lots of stories of astonishing dedication and competency by individual physicians and nurses, coupled with malfeasance and/or corruption on a truly industrial scale by far-away bureaucrats in the health ministries.
You mentioned housing at the start, but didn’t follow up on it. That’s a case where the cause of increased costs is fairly unambiguous: There’s been a proliferation of laws enacted at the local level that restrict the elasticity of the housing supply. Nolan Gray jokingly calls it The War on Affordable Housing.
I was really interested in this topic, so I did three podcast episodes about it:
1) Trailer Parks, Zoning, and Market Urbanism with Nolan Gray
2) New York Urbanism with Stephen Smith
3) How Land Use Restrictions Make Housing Unaffordable with Emily Hamilton
The problem of skyrocketing housing prices is virtually non-existent in Japan, despite also having high population density and being very well developed. Seems to mostly be related to various regulatory burdens.
One thought that applies to medicine, but maybe not to other things, is the fact that better technology has a tendency to transform acute and fatal conditions into expensive, chronic conditions. For example, you’re shelling out thousands of dollars a week on a hospital “frequent flier” who, in a society with less advanced medical technology, would have died from something or other after their second or third trip to the emergency room.
Another likely factor is the tendency for expensive medical treatments to give you diminishing returns as you get close to the end of life. It’s pretty well known that you incur a significant fraction of your lifetime medical expenses during your last year of life, but these expenses (by definition) cannot add more than a year to your life.
And with modern educational technology, you are able to steer a likely college drop-out into a six-year cartography program.
Obviously this blue part here is the land.
The effect you’re pointing out is real, and a definite cause of specious claims like “US infant mortality rates are worse than Cuba!”, because we can take an infant who would be written off as a stillbirth in a less-advanced country and keep them alive for a little while longer.
But it doesn’t explain why US medical costs are so much higher than countries with equally-advanced medical systems and similar-or-better life expectancies and end-of-life care, such as Canada and Japan.
Costs and life expectancy don’t have to be synchronized. Scott Atlas argues that the lower life expectancy is not an indictment of healthcare quality, but rather lifestyle choices: more car accidents, more in-vitros and more obesity in the US.
When controlling for those, the US healthcare and life expectancy comes out on top (so those factors are impactful).
They have price controls, we don’t, simple as that.
I’d prefer that if we need to subsidize the world’s R&D that we just do that directly, and pick up the same price controls, rather than let prices float ever upwards in a dysfunctional vague approximation of a market. For drugs, at least, if we did that we’d save a lot of money overall, don’t know about other facets of medical innovation.
Here’s a “model” that I came up with, which means it’s an entirely amateur model. It also smells of “lump of labor,” but bear with me. It appears to my untrained eye that virtually all industries are less labor intensive than they used to be. But the extent of this decimation of jobs varies between sectors. So we have Facebook-type entities operating almost as “lights out” operations, with warehousing and manufacturing not far behind in level of automation. As recently as the 1990’s the so-called service sector was supposed to absorb the surplus labor, but there you have inventions like the “call center,” based on the realization that phone answerers and entities on whose behalf phones are answered can be modeled as a many-to-many relation. That and many other “efficiencies” hollow out service sector employment. At this point the health care professions are the only college degrees that open any doors of opportunity whatsoever. America today is very much like Vonnegut’s dystopian novel Player Piano, except the last dominoes standing are nurses and other allied health professionals, not engineers and managers as in the novel. Basically, you have the combined wages from all the industries other than health care (or education, if you prefer) as the pool of money out of which the wages in health care get paid. To the extent that the health care sector is a larger slice, not necessarily of the GDP pie, but of the workforce pie, the bigger burden it is on the overall workforce.
Perhaps, in the minds of people deciding on a career, the fact that health care skills are the only skills really in demand counts for more than the fact that the health care professions are facing record levels of stress, burnout and downward pressure on wages. All the other sectors offer the same disamenities, only more so. So everyone wants a marketable degree, which is to say, a degree in a human services field. The less outsourceable job is the one in service to the human body, after all. So now practice of occupational therapy requires a master’s degree, and physical therapy requires a doctorate. The applicant/opening ratio in the MSOT program at my local public university currently runs 10:1, which I believe is probably the main driver of the mushrooming in for-profit schools with curriculum limited to business/IT/healthcare (in some cases just healthcare) and tuition levels well above traditional private universities. I think of them (speculatively of course) as today’s version of the Caribbean medical school circa 1983. (The “Baby Doc College of Physicians and Surgeons” for those who were into Doonesbury back then.) Where is the money going? You seem to have demonstrated, with your graphs, “not into profits.” But profits are after expenditures, expenditures include salaries, and salaries include CEO salaries. Also, some of the expenditures of the big companies go to the posted profits (and also executive salaries) of first-, second-, and third-tier suppliers of the hospitals, universities, etc. sending consumers and insurers the six-digit bills. So largely, the money maybe is going into executive salaries. Since you like time series graphs, it might be fun to take a peek at which way that’s trending.
Totally stealing this.
A couple threads down (re: AI) people are debating “technological unemployment,” which touches on this. Old-school econ basically claims a society can’t ever run out of work to do, but people are coming around to the idea that with big enough productivity it may not be possible for demand to keep up with full employment. This gets shorthanded as “robots,” both because of technological productivity drivers and also because of a limit case: human-level AI would be sufficient to outcompete humans for every job, with actual robots if you will.
Full employment has been a moving goalpost for a long time. That’s why they invented NAIRU. I suspect that full unemployment (running out of work to do) will be a similarly elusive goal, should we decide to go for it. More likely we don’t commit to either goal and continue the frustrations both of those who can’t get out of mandatory overtime, and those who can’t break into full time hours.
For those interested in healthcare cost analysis, I recommend the TIE FAQ series on the subject. (The link there to the main FAQ index also has lots of other good information).
Spoiler: it’s everything – not just end-of-life care, or FDA regs & patent policy, or fat, etc, they all play a part, with no particular driver of increased cost being much larger than the others.
There’s an interesting idea I saw somewhere to break the employer-credentialism cycle keeping college supply low – company schools.
The thought experiment: Amazon does for college what they did for clouds. They start an Amazon U for their employees to use. Obviously Amazon themselves accept degrees from there, since that’s the whole raison d’être. It gets good results cheap, because they can. Then it gets opened up to outsiders, then other employers start accepting the degrees because Amazon did and didn’t fall apart.
Which may then get the problem of “We want to hire someone with a degree from Amazon U because that’s a valuable degree and everyone else in the field is doing it; your degree from Redbrick U (formerly the Polytechnic of Redbrick) isn’t any good to us”.
I do think a large part of the elite university degree chasing is because companies consider “this applicant studied under the leaders in their fields, they made connections, they know the up-and-comers, they’re plugged into the network and besides, all our management went there in their day” whereas a degree from Cornfield University may technically be as good, but whoever heard of Professor Smith (your tutor) versus Nick Bostrom of Oxford?
It’s the same reasoning I saw about why are all the tech hubs in California – sure, a city in the Rust Belt could spend a lot of money setting itself up as a tech hub but it won’t attract in the kinds of successful start-ups it wants; they’re all headed for the Bay Area and Silicon Valley because that’s where the big fish are, and if they want access to the venture capitalists and opportunities to grab attention of one of the big fish, that’s where they have to go. Technically, in a wired and interconnected global environment, there’s no reason you can’t transform Milltown from “former paper milling town” to “tech hub for the region” but in practice it’s not going to work out that easily.
https://en.wikipedia.org/wiki/Economies_of_agglomeration
One note on veterinary costs/regulation – while it may not be as extensive as regulation in the human medical field, vets are increasingly roped into the prescription/regulatory system. A friend of mine has to get a prescription to buy her cat’s food.
This is a good post, but I think there are reasons to be more optimistic. As bad as the increasing costs are for medical care and education, your phone example shows concurrent gigantic increases in standard of living. I don’t believe inflation rates accurately reflect product innovation – consider how much someone would pay for a “new” five-year-old model iPhone (or TV, or computer, or speaker system…). This does make increasing costs in other sectors look even worse, but it shows how bad inflation is as a measure of anything aside from the price of peanut butter over time. Innovation can find a way around these bloated sectors. Uber / autonomous cars can make the subway system moot, deep learning could greatly improve / cheapen medical diagnosis, better credentialing could finally get online education mainstream, and Airbnb could make a dent on housing costs. I think that markets are finally at the point of getting around cost disease / regulatory roadblocks in these areas.
Also even though I suspect that regulatory burden and poor incentives from regulations are behind all of your examples, I think that housing is the case with the most empirical support (consider e.g. SF v. Houston). I think maybe you should not have included it in fairness to the other examples. Other things that may fit better: legal services and sports.
Lastly, as a libertarian who cares about poor people, I object to tax and spend policies because of deadweight loss, knowledge problems, bad incentives from regulations leading to counterproductive outcomes, unfair burdens and benefits of taxation, and the difficulty of getting rid of underperforming agencies / bureaucracies. Only the last point seems somewhat related to cost disease.
No they can’t, and for a very simple reason: a car takes up much, much more space than a person (or even two people, or five). The subway remains the most efficient means of transporting large numbers of people through urban space, by a long shot. This image, though it only shows you buses, gives you some conception of what I am talking about. This is a matter of simple geometry.
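To put rough numbers on that geometry (all figures below are illustrative assumptions, not measurements – the footprints include following distance, and the occupancies are typical averages):

```python
# Road space consumed per person carried, for a car vs. a bus.
# All numbers are illustrative assumptions, not measured data.

def space_per_person(footprint_m2: float, occupancy: float) -> float:
    """Square meters of road consumed per person moved."""
    return footprint_m2 / occupancy

car = space_per_person(footprint_m2=30.0, occupancy=1.2)   # car + headway, ~1.2 riders
bus = space_per_person(footprint_m2=90.0, occupancy=40.0)  # bus + headway, ~40 riders

print(f"car: {car:.1f} m2/person, bus: {bus:.2f} m2/person ({car / bus:.0f}x difference)")
```

Even with assumptions generous to the car, the per-person footprint differs by an order of magnitude, and rail does better still.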
Define “mainstream.” I do think online education probably does, and should, have a role to play in the education of the future, but I also think people who believe that it can be an effective wholesale replacement for schools are failing to see the biases created by their own experiences and personalities.
I like Airbnb, but I don’t really see how it could make a dent in housing costs in a large-scale way. If *everyone* did Airbnb, then the supply would be such that you probably could not generate enough revenue from it to substantially offset housing costs.
Airbnb could make a dent on housing costs
Airbnb is being accused of exactly that – by driving up rents and contributing to the housing shortage because people figure they can make more money as hosts for Airbnb guests than renting out their properties to tenants. Some clever chaps are even figuring out how to turn a profit by renting from a landlord as a conventional tenant and then sub-letting as an Airbnb host!
If your tenant can charge double the rent for the property, why not cut out the middleman and rent that property out to Airbnb guests for the doubled rent yourself? (Never mind the hosts unhappy that their earnings are taxable and that Airbnb are reporting to the Revenue Commissioners – there does tend to be quite a strong strain of Irish landlords not declaring rental income for tax, which has a knock-on effect on tenants’ rights e.g. not providing rent books, which by law they are required to do, since that would be evidence that they are receiving money from rentals. This also means that tenants in a dispute can’t prove they are tenants and that person is their landlord if they have no evidence of paying rent).
Zoning regulations, I imagine.
Two comments: one, regarding why colleges declare a huge cost, then give scholarships to a majority of the students. From the college’s standpoint, they need two things: 1) enough smart students to maintain their prestige (and do the complicated grunt work for the professors, at research institutions), and 2) enough tuition income to pay the bills. We might add 1a) enough smart, visible minority students to keep the diversity numbers looking good enough to fend off attacks; 1b) for state-supported schools, enough children of in-state elites to fend off challenges to the state funding; and 1c) for private schools, enough children of elites/alumni to keep the donations flowing.
So, you have two groups of students: the brains and the bucks, which don’t overlap all that much. Colleges need to compete heavily for the brains, especially brains from demographically important sub-groups – these students are enticed with targeted funding, to get ones who could go to higher-ranked schools to come to the offering school instead. The bucks come from less smart kids, who couldn’t get into higher-ranked schools, and from foreigners. This is a sliding scale, of course – marking up the price, then giving you 10% off to make you feel like you’ve got a good deal, just like on QVC. In larger schools, this is worked out with an explicit mathematical formula, but every selective-admission school does it.
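The “explicit mathematical formula” presumably varies by school, but a minimal sketch of the sticker-price-plus-targeted-discount model might look like this (the sticker price, the student categories, and the discount rates are all hypothetical):

```python
# Hypothetical tuition-discounting sketch: a high sticker price, with
# targeted "merit aid" discounts for the students the school most wants.
STICKER = 50_000  # assumed sticker price

def net_tuition(sticker: float, discount_rate: float) -> float:
    """What the student actually pays after the targeted discount."""
    return sticker * (1 - discount_rate)

# "Brains" get deep discounts to lure them away from higher-ranked schools;
# "bucks" (full-pay and foreign students) pay the bills.
discounts = {"brain": 0.80, "average": 0.25, "full_pay": 0.00}
for kind, rate in discounts.items():
    print(f"{kind}: ${net_tuition(STICKER, rate):,.0f}")
```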
Jumping subjects: I think part of the answer as to “where is the money going” is to retirement funding for the current staff. Before the 1930s, nobody “retired” at all– you worked until you couldn’t anymore, and then your family was supposed to take you in and take care of you. (Anyone without family to take them in was in dire straits.) When social security came into effect, the “retirement” age was greater than the life expectancy– roughly, as if the retirement age today was 80. And SS was considered radical at the time.
But now, we expect to not have to start work until our mid-20s (or later!), stop working by 65 (at the latest), and then live at the same standard of living for the rest of our lives (20+ years), all without a dime of money or a minute of unpaid work from our family. That’s a HUGE “purchase” to attempt to make– a million bucks, if you’re earning $50K a year (more if you live longer). No society has ever even attempted this feat, until now. No wonder the jobs which involve a lot of human workers (education, health care) have gotten vastly more expensive.
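The million-dollar figure checks out as a back-of-envelope (retirement ages and spending are the comment’s assumptions; this deliberately ignores investment returns and inflation):

```python
# Rough cost of the modern "retirement purchase": maintain a $50K/year
# standard of living from retirement until death, with no family support.
annual_spending = 50_000
retire_age, death_age = 65, 85  # assumed; 20+ years of retirement

cost = annual_spending * (death_age - retire_age)
print(f"${cost:,}")  # $1,000,000
```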
This is a very good point.
And pointing to things like “pensions haven’t increased much since the 1950s” misses a lot of it because my understanding is that pensions were just being invented in the 1950s, and were being offered to new workers, and there was no large pool of already-retired workers collecting them for 20+ years.
Considering “retirement” as a new entitlement program that costs on average $1 million per person and is funded by some mixture of the state and the person’s employer would go a long way towards explaining why costs have risen significantly.
And this is not captured by the various hypothetical questions in the OP. So the proper framing would not be “Would you rather live in a house like your parents’ for half the cost?” but rather “Would you rather live in a house like your parents’ for half the cost, but you enter the workforce at age 18 and continue working until you keel over and die?”
I would not be surprised if retirement were part of the puzzle here, but I gotta push back against this one a little.
Yes, but you can’t base this off overall life expectancy. You need to look at life expectancy at 60 or 65. Per this table, life expectancy in the US at age 65 was about five years higher in 2010 than in 1950. A meaningful increase, yes, but not “as if the retirement age today was 80.” For most of history, overall life expectancy was 30 or 40, but if you made it to adulthood, you’d probably make it to 60 or so.
I feel like this may have been true if we were having this conversation 40 years ago. Anecdotal, but my impression is that WWII generation were the ones who really got to see the benefit of this. In my family, most folks in my grandparents’ generation were retired by 60; most folks of my parents’ generation (Boomers) worked well into their 60s, at least until 65 in most cases. I believe data backs this up: people actually are working to older ages today than they used to. They are starting later, though.
I agree with the general point that we probably need to rethink the notion of “retirement,” for a variety of reasons. But I’m skeptical that it is the driving force behind cost disease (though it could well be a contributor), and I don’t think the answer is really “let’s go back to the way it was.”
I think the number that really matters is life expectancy at the age of entering the workforce, which your table doesn’t show.
One theory is that these price increases are driven by consulting fees replacing, or supplementing, bribes.
(This was originally an idea I heard from Jim, though I use more polite and less hyperbolic language to describe it.)
Back in the Olden Days, if you wanted to get a building built in the city you’d go visit the Don and hand him a big briefcase of twenties. Then he’d go around to his friends in the unions and the housing board and tell them that getting your building built is a big priority.
Today, if you wanted to get that building built you still might have to give the Don his briefcase. But then you also have to hire an environmental impact consultant to do the same thing with the environmental advocacy groups and the EPA, plus a diversity consultant to do the same thing with the minority advocacy groups and the EEOC, plus a permit-expediting consultant to do the same thing with the housing board, plus a PR consultant to do the same thing with the newspapers… And by the time you’re done paying everyone with their hand out, you’ve paid ten times what the Don ever asked for.
I’ve definitely heard of this happening in education from the teachers in my family, and I’ve seen it happen in the lab. From what I hear medicine and construction are ground-zero for this sort of thing. It might not be the whole story but it seems to play a role.
I think I understand what you mean about construction, but how does this happen in education, medicine, and the lab? Do you just mean that they need buildings, too? But isn’t that a small part of their budget? What else is held hostage? publicity?
It would explain why subways are 20x more expensive, being pure building, and maybe that’s why education is “just” 10x more expensive.
Or maybe it explains 5x of schools getting more expensive and something else is the other 2x.
Building subways probably should get somewhat more expensive, because even if everyone plays ball, it’s going to take work to make sure you aren’t boring through something important, and/or deal with the damage that does occur when you bore through some critical piece of infrastructure that no one knew was buried there 100 years ago. Still not 20x, though.
In the lab, how it works is roughly this:
Let’s say you have a BSL 2+ lab in NYC, where you work with mice as well as human and mouse tissue cultures including primary patient cultures. You sometimes use P32 labelled phosphate in experiments.
Every so often the FDNY will send an inspector into your lab. When he arrives, you need to show him (among other things) your C14 certificates. You get these certificates by hiring an approved third party who puts the answers to the C14 test up on a projector, turns the projector off while handing out the test, and then looks the tests over to make sure everyone got more than 70% right. He will also check the dates on various tags hanging off of your faucets and the labels attached to your refrigerators. You pay a third party to come in periodically and put new tags and labels on. He will also make sure that you bought the right brand of doors and door labels, the right brand of shelving, the right brand of storage box, etc. You should make sure your EHS department has hired at least one or two ex-FDNY fire safety experts to run your own inspections beforehand.
This process, with minor variations, is repeated with the NYC Department of Health, NIH Office of Laboratory Animal Welfare, NIH DOHS, HHS and the NRC. Any one of whom can shut you down if you fail an inspection. There are probably more, but I’m not a PI so I’m insulated from a lot of the behind-the-scenes stuff.
TL;DR: Instead of just slipping someone a cash envelope you pay an order of magnitude or two more a year on trainings or certifications by third parties and buying products at a 10-100x markup from approved vendors. Everything is legal and above-board, and most importantly the people involved all still get their cut, but the process is extremely wasteful.
OK, but how can you tell the difference between corruption and lousy regulation? Can anyone get licensed to administer C14 tests? How do you know that “the people involved” get their cut? (excepting the ex-FDNY) Of course, even if it is open, the middlemen are people who specialize in dealing with the government and lobby for more regulation.
Moreover, if it is all aboveboard – an explicit set of hoops to jump through – then you know what the price is and can plan ahead. My understanding of Jim’s point is not that 10 people with vetoes means 10x the bribe, but unlimited bribes. If there is one boss, he asks for a percentage of your profit, low enough not to kill the golden goose. If there are 10 veto points sizing you up and demanding bribes, it is a prisoners’ dilemma between them: they ask for too much and you don’t build. Whereas the NIH has explicit guidelines. The cost requires more money in planning a lab, preventing some labs, but it is a predictable cost, so richer labs still happen. This is a qualitative difference.
>OK, but how can you tell the difference between corruption and lousy regulation?
The point is that it doesn’t matter. Hell, even if it’s good regulation, too much of it stops being a net positive just by sheer weight of burden.
If you can’t tell and it doesn’t make a difference, don’t say it’s corruption. But we’re not talking about your point; we’re talking about Jim or Dr Dealgood’s. Dealgood seems to say that he can tell. I’m not sure whether he thinks it makes a difference. I detailed in my second paragraph how it could make a difference; I think this argument is from Jim and he thinks it makes a difference.
No, wait, I don’t understand the construction example. If payoffs really had a substantial effect on the price, they would be too big to hide. Do we see environmental consulting fees taking up such a large portion of the budget? I think not. Maybe many administrative fees are actually bribes. If 10% of the apartments are “affordable,” that could raise prices by 10%, whether they are bribes or not. But if the discounted rent reflects what the price would be otherwise, maybe they don’t raise the other prices at all!
I do think that it is plausible that a large number of independent veto points could result in a high risk, especially risk of delays, requiring a high profit, resulting in high prices, but still the profit flowing to the owner of the real estate.
Do American healthcare consumers pay for the entire world’s healthcare technological progress?
No.
No, we pay significantly *more* than enough to subsidize it all. (TBF I’ve only done the explicit analysis for drugs, which are kind of low-hanging fruit on that axis; maybe medical devices, surgical techniques, etc. are very different, but I doubt it)
All these things are either government-supplied (K-12 education), heavily government subsidized (Medicare) or regulated (health care more generally, higher education, the risk part of home mortgages [Fannie Mae]).
As others have noted upthread, the cost of public employees — public schoolteachers particularly — is very understated, because it does not address actual total compensation, including medical benefits and earned pensions, as well as administrative bloat and the role of credentialism in expanding pay in those ranks.
Also upthread: the private sector has its own cost disease: CEO compensation.
And that’s a poor comparison, because CEO pay has not (as a rule) trickled down to higher costs to the consumer.
I thought there was an explanation for medical costs rising: end-of-life care.
There’s a study referenced here (Barnato, McClellan, Kagay and Garber, 2004) that says 30% of all Medicare costs go to the 5% of the population that dies that year.
There’s also a counterargument saying that it is a myth.
And I’ve heard some arguments on this that new treatments are being created for little benefit, e.g. a new drug that increases life expectancy by 50%! Which means life expectancy goes from 4 months to 6 months. The cost of developing this would be high, so the company wants to make that money back through high pricing and good marketing. But the end result would have minimal effect on life expectancy rates.
The question is why. And Scott’s arguments above are at a loss. But I think a combination of my claims re: misalignment of investment for superior goods, where all the money chases small returns – https://medium.com/@davidmanheim/chasing-superior-good-syndrome-vs-baumols-or-scott-s-cost-disease-40327ae87b45 – and Robin Hanson’s paper on how overspending is a way to show caring – http://mason.gmu.edu/~rhanson/showcare.pdf – explains the exact dynamic well.
There are different types of “cost disease”. Baumol cost disease, for example, is different from health care. In health care, people are willing to pay a lot for a small marginal gain – say, dying six months from now instead of tomorrow. Because of this and diminishing returns, the health care system expands to deliver small, marginal gains at the cost of consuming a lot of resources.
While the wages of health care workers might not be going up a lot, the number of health care workers certainly is. At this point, health care is basically the largest private employer in pretty much every state. Soon, we will all be working in health care, delivering very small marginal improvements /joking, kinda.
Let’s say that you are someone who is not willing to pay a lot for a small improvement (like me). Now lets say you are mandated to have health insurance. How do you avoid paying a lot?
Thinking a bit more about it, it strikes me that the core of the issue here is not absolute living standards, but the gap between the kind of life we seem to think we should be able to afford and the kind of life we really can afford, which seems to be getting bigger and bigger.
And there isn’t any obvious psychological reason why this would be so: yes, people’s expectations get higher as they get richer and technology advances, such that what was once a middle-class lifestyle now seems poor and what was once poor now seems destitute, and so on, but that does nothing to explain why the gap between expectation and reality would grow (if anything, with technology developing exponentially, basing one’s expectations of future growth on the past should cause one to underestimate the future, at least as far as technology is concerned).
I wonder if part of it is just that governments and private corporations have both gotten better and better at finding innovative ways to let us leverage future expectations for the purposes of current consumption? Of course, someone has to be paying for all that right now, and I think in the US case it is mostly the third world living beneath their means to buy US treasuries so we can live beyond our means.
It’s not just national debt – credit card offers, of course, also seem much more plentiful. That is, keeping up with the Joneses, if the Joneses have subsidized student loans and several credit cards, is a lot harder unless you also take on such debts, at which point you are now even with them as compared to the world in which these loans were not available to either of you. But the longer you eat your seed corn, the poorer you are getting in reality, even as it seems, temporarily, that you are enjoying a good lifestyle.
The result is everyone living beyond their means to the point that attempting to live within one’s means either feels like a ridiculous hardship relative to the lifestyle the culture leads you to expect is normative, or else only attainable for the wealthy.
Isn’t that the logical result if you proudly tell people that your economy grows by a certain percentage, but fail to tell them that most of that money goes to the 1%?
I would think that there are plenty of cultural factors that would push people to live beyond their means (zero-sum positional games, the altar upon which all progress is sacrificed), but I doubt duplicitous economic growth reporting is to blame.
I’m a healthcare actuary, and I have read a lot of theories/evidence about why healthcare costs trend at a much higher rate than inflation. It would take too long to get into all that, but I did want to point out a logical argument about why it’s difficult to compare increased consumption of healthcare services to trends in mortality/life expectancy.

A pretty large majority of healthcare services are consumed by a small fraction of the population. I believe a quarter of all expenses can be attributed to 1% of the population (from a Kaiser report I read a while back). I don’t have the exact figures, so I’ll make up an example and extrapolate (or don’t, and think of the following argument in terms of the first set of figures). Say 10% of the population consumes 75% of services. You may be able to increase the lifespan of these particularly sick individuals (whereas in the past, technology would not have allowed us to help them and they would just die), but overall, you’re not going to see a huge impact on the total population. I would venture to say that a disproportionate share of the high medical cost trends is incurred in attempts at helping these particularly sick people.

Because we have insurance, all of this experience (healthcare expenses) is pooled and spread to everyone else when the insurance companies develop premiums for the plans. I would argue that it NEEDS to be this way in some shape or form – obviously we shouldn’t just abandon those that are sick – but the “healthy” consumers are really starting to feel the brunt of this conundrum, and it’s understandable that many are fed up with these high costs.
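The concentration claim is easy to make concrete. Using the figures from the comment above (1% of the pool incurring 25% of the spend; the pool size and total spend are made-up round numbers):

```python
# How concentrated spending pools into everyone's premium.
population = 1_000
total_spend = 10_000_000                    # assumed: $10M across the pool
top_share_pop, top_share_cost = 0.01, 0.25  # 1% of people, 25% of costs

top_n = int(population * top_share_pop)
avg_all = total_spend / population
avg_top = total_spend * top_share_cost / top_n
avg_rest = total_spend * (1 - top_share_cost) / (population - top_n)

print(f"sickest 1% spend {avg_top / avg_all:.0f}x the average")
print(f"everyone else spends {avg_rest / avg_all:.2f}x the average")
```

The sickest 1% spend 25x the pool average, while everyone else averages about three-quarters of it; premiums spread the difference across the whole pool.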
If you wanted to make medical spending lower, heroic medicine is a tempting target, but the question isn’t why medical spending is so high. The questions are (1) why is it twice as high in America as Europe; and (2) why do both places see such rapid increases in price?
For this to answer the first question, there would have to be more heroic medicine in America than in Europe. I don’t believe this. Medical care is pretty much identical, with every individual component costing twice as much in America. Even if it were true, that’s just a factor of 2. Much scarier is the rapid increase with time. I find it plausible that heroic medicine is increasing with time, but nowhere near fast enough to drive the trend. And maybe it hasn’t increased proportionately at all: Lewis Thomas wrote in the 70s about the previous half-century.
I wonder if it is that American hospital pricing systems are crazy. I was noodling around on an American hospital site, clicked on “how to pay your bill,” and was astounded that the hospital was saying “yeah, we’ll charge you for the stay and the procedure, but that’s not the whole of it – other bills will be charged separately, e.g. if you had to have x-rays the radiologist will charge, the surgeon will charge, the anaesthetist will charge, etc etc etc.”
To me that seems like walking into a shop and when you go to pay, you’re told “yeah, we’ll charge you for the goods but as well you have to pay separate bills from the cleaners who keep the premises in good order, the delivery drivers who truck the goods to our store, the painters and decorators who make it look so nice…”
Isn’t there any way of “here’s the rate for the operation and it’s all added into one bill so at least there aren’t sixteen different bills from different entities”? That seems to me to drive up your costs if everyone involved is “well, the insurance is paying for it anyway, bung another zero on the end!”
For various reasons, over here specialists like surgeons, radiologists, etc, tend to be independent contractors rather than employees of the hospital. Not only does this result in the billing issue you describe, it also creates the issue of the hospital taking your insurance but the specialist not. Furthermore, if a hospital has, say, multiple anaesthetists who work there and some of them take your insurance but some don’t, there’s often no way to guarantee that you get one of the ones who does, even if you’re scheduling your procedure well in advance.
And part of the reason this is a problem: in addition to insurers paying a higher percentage for “in-network” providers, the real value is that in-network providers have price controls: they will charge only what the insurer allows, and write off the rest. The difference between charge-master and network rates can be a factor of 5 or more; when your insurance says it pays 50% of out-of-network, this may only be 10% if the provider bills at 5x the network rate.
There is no legal or logical limit to what a provider can bill you in the absence of a network contract. Most markets don’t need such price controls, but when you have price opacity and all the other market failures of healthcare…
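The 50%-is-really-10% arithmetic, worked through (the network rate and the 5x billing multiple are the assumed figures from the comment above):

```python
# "Pays 50% of out-of-network" vs. what share of the bill actually gets covered.
network_rate = 1_000       # insurer's contracted "allowed" amount (assumed)
billed = 5 * network_rate  # charge-master price at 5x the network rate
coinsurance = 0.50         # plan "pays 50% of out-of-network"

insurer_pays = coinsurance * network_rate  # 50% of the *allowed* amount, not the bill
effective = insurer_pays / billed          # share of the actual bill covered

print(insurer_pays, effective)  # 500.0 0.1 -> only 10% of the bill
```

The patient is left with the other 90%, which is why out-of-network coinsurance percentages are so misleading.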
This seems relevant: http://time.com/4649914/why-the-doctor-takes-only-cash/
No, it really isn’t.
Everyone who shops at that store makes use of the functional premises, the goods brought in by truck, and the pretty storefront. They’re common goods and can be easily assimilated into the markup. The non-common goods (i.e. the customer who asks the staff person to get out the ladder to reach an object on the top shelf vs. the customer who grabs their own item) add such a negligible cost that nobody minds subsidizing it.
Not everyone who goes into a hospital benefits from the radiologist or the oncologist. And because these things are expensive, it’s not something that can easily be translated to a flat-rate markup.
A better analogy would be an amusement park that doesn’t have a flat-entry fee, and just bills you on exit. Ride the phlebotomy rollercoaster, that’s $x. Go into the haunted mansion of x-rays, that’s $y. Eat five penicillin snow-cones, that’s 5*$z. I think most of us can easily wrap our minds around such a concept.
There’s a difference between making everyone pay for the radiologist via flat-rate markup versus having the radiologist’s fee included in the overall hospital bill rather than being a totally separate bill. The latter is what D’s complaining about, and it’s part of what makes it hard for budget-minded patients to shop around – many hospitals can’t even tell you up front what they’d be likely to charge for a given procedure themselves, and they certainly can’t tell you what their independently-contracting radiologist is likely to charge.
To expand on this: the hospital will have an official price list, called the “charge master,” but virtually nobody will actually pay it. You’re expected to have the price negotiated down by your insurance provider and even if it falls within the deductible you’re still paying much less. And if you have no coverage and can’t afford the full sticker price, they will usually be willing to bargain it down from something completely unreasonable to something only partially unreasonable.
At the top hospitals in NYC the only people who pay the sticker price are Saudi princes, which means the hospitals set prices that only Saudi princes can afford.
I work with provider reimbursement contracts a lot, and I have to say that the whole charge master system has gotten a bit overly complicated…
But to BBA’s point, as a consumer, even if you don’t have insurance, you still have some power to negotiate rates with a provider on your own. I’ve heard of people paying less for certain types of services without insurance. Probably not at a large hospital, but for physician or outpatient services that are a bit more elective. Maybe a minor surgery, or some routine services.
There are some consumer protections if you’re willing to make the effort to discuss with the provider exactly what services will occur and be sure you are not over-charged, but I think most people look at their bill and say WTF is a HCPCS/CPT?? and just end up not bothering, haha. There’s a quirky guy at my office I used to sit next to and I overheard one of his phone calls with the physicians office and through his persistence he negotiated down his bill from like $1000 to $300, just being armed with the right knowledge.
Is there any reason one can’t separate the service of bargaining with providers from the service of paying for them? Are there firms that provide no insurance at all but charge you a much lower price in exchange for getting their customers the same price an insurance company would get?
If not, are there reasons such firms cannot exist?
Essentially yes; some companies self-fund their health insurance, but it’s administered through a regular insurance provider.
On the individual level, I’m unaware of this existing for general medical, but I believe it exists for vision, dental, and prescription drugs.
Yeah, I think it’s a problem. Provider contracts cover a lot more than just reimbursement terms. There are a lot of laws (particularly for HMOs) that require companies to include all sorts of protections for consumer/payer/provider. I don’t know for sure whether these couldn’t be separated for medical services, but I think the fact that it’s not common probably answers that question. I do know that it occurs for dental and sometimes Rx, although I think they are much less common these days; they were referred to as dental or Rx “discount cards.” To Nybbler’s post: yes, these are called Administrative Services Only (ASO) companies. Although I think that some of the largest companies do self-fund and administer their own health plans, I think they are still subject to the same laws as the insurers.
@DavidFriedman
In Portugal, there are insurance companies that provide almost no insurance (only in extremely severe cases). I was a customer of one for a good while because of the bargain part. Doctor visits were significantly cheaper (less than half the price) because you would pay the insurance company price, if the doctor was part of the insurance company network.
It’s not only that, the prices are also entirely opaque, for the hospital itself and all of the ancillary providers. Not just post-insurance, but pre-insurance. You literally cannot find out the prices before you’ve had the work done – no wonder the market is so dysfunctional. (Then post-billing there’s always some Calvinball regarding the charge-master rates vs. insurer caps vs. the amount they’re actually willing to settle for (if you can generate a plausible threat of bankruptcy, otherwise you have no leverage)).
> Medical care is pretty much identical
I’m not sure that’s true. Scott Atlas in his book and writings on healthcare suggests that there is indeed more heroic care in the US.
Two salient differences: More in-vitro fertilization (and therefore more premature and twin/triplet births). More heroic cancer care.
Forgot to click the box to get notifications on my comment! To your point, Douglas, I agree, but that wasn’t really what my initial post was targeting; I was just commenting on the complications of using medical expenses as a predictor for life expectancy.

From my work experience and many things I’ve read, I know one of the biggest reasons for discrepancies in costs between the U.S. and other places is the degree of medical waste. The company I work for actually has a tool that can take a medical claims data set, identify the degree of “waste,” and determine the potential savings opportunities for providers. We over-utilize services to a frightening degree, particularly when there’s no evidence that additional services will provide any meaningful benefit. I don’t think medical malpractice suits directly drive the increase in costs, though; it’s more a general paranoia that if we don’t do EVERYTHING possible to exactly identify the issue and treat it, something terrible might happen (when the vast majority of the time, it won’t). Also, we still have many reimbursement systems that give physicians/hospitals bad incentives to over-utilize.
Another issue, which I was surprised to read about is just how awfully slow the U.S. has been to adopt electronic medical records. This has created some bloat in the system as well. Doctors spend more time on administrative duties and this results in worse care provided to patients.
Also, it’s pretty obvious that companies (particularly Rx and D.M.E.) know they can use the U.S. market to subsidize the lower prices they charge in foreign markets. Insurers already do something similar – they subsidize the lower reimbursements from Medicare/Medicaid by charging more in the Commercial markets.
Electronic medical records have already been widely adopted. It is a blight on the medical landscape and a major frustration for many physicians.
The electronics records often do not reflect how physicians work. I found that electronic records caused me to work more slowly. Everyone thinks that technology is always great and wonderful but much of the medical record technology is garbage.
For example, one place I worked went to a new electronic scheduling system. The secretaries had to click through ten screens to make an appointment. The result was that they wrote the appointments in a schedule book the old fashioned way and entered them into the computer later.
I did medical chart reviews for a while and it was clear that many physicians are just cutting and pasting visits, probably to get through work more quickly. Thus we end up with long lists of diseases per patient because, when you cut and paste, it is easy to just add a new diagnosis to the bottom of the list. However, this creates a massive amount of background noise. I am less efficient when I go to a chart and I have to wade through a list of every condition that person has ever had to find the critical information. We used to place the old information into a past medical history category but now it stays in the active list because of the way electronic records are set up.
The problem is so bad that many clinics are going to medical scribes who follow physicians around and write down the details of the visit as the doctor meets with the patient. The physician can then review the note and sign off.
When I was in training, we were taught how to do a patient summary, listing pertinent medical details, to send to a consultant. (I am old fashioned; I still did that until my recent retirement.) What I was receiving nowadays would be tens to hundreds of pages of electronic records that were all virtually identical. I would have to look through all of them because they were faxed to me and had no apparent organization: a huge waste of my time when trying to find a few pertinent details on a new patient.
The other issue is the cost of electronic records. I have known several practices that had to close after spending money on electronic records. The cost of investing in the records ended up being higher than the original quote, the doctors saw fewer patients, and reimbursements did not increase to cover the costs. It destroyed their bottom line and they had to close.
There is an ideal of having all medical records coordinated electronically, but it doesn’t exist (except maybe at the VA). I spoke with one of the biggest integrated networks, who talked about how they were planning to coordinate their hospitals, doctor offices, laboratories and other facilities. They have been held up as an example of one of the most integrated systems but, on questioning, they have actually only integrated a few of the largest players in their system. They just laughed when I asked when they were going to meet their goal of being fully integrated.
Well functioning affordable electronic medical records? Right now I place them in the category of unicorns and leprechauns.
This is a very valuable comment for me, thank you.
Are you a practicing physician (or nurse)?
Thanks, and my condolences for this headache you have to deal with.
If I might summarize, the medical community has had an “upgrade” that:
1. Was originally presented as an improvement
2. Costs more to set up
3. Costs more to maintain
4. Takes more time to use
This fits into the theory of what I have called “quality accretion” in another post.
Honest question:
What forces and incentives are in place to prevent you from going back to the old system?
Ilya,
I am a family practice physician. I have been in practice for about twenty years. I retired in the last year and I am now earning a living through the Internet and I have a small organic farm to provide many of my food needs. I cannot describe how much abuse I put up with from insurers, administrators, and even patients (one patient even threatened to kill me if I performed a physical exam and they experienced any discomfort) and how happy I am to now have a sane lifestyle. I cannot imagine ever returning to the grind that is the daily practice of medicine.
I was a renegade for most of my practice and I did not go electronic. But I had a cash based system and low overhead costs. I chose this to make my life easier.
However, I sometimes filled in for physicians on vacation who used other systems. Once they went to electronic records and switched their systems over, there is really no easy way to go back.
It is so expensive to implement an electronic record that most places cannot afford to switch to a different system if the one they have does not work well. I know places that have spent $30,000 per provider to implement electronic records. There are probably some less expensive. Then there are monthly costs to maintain the system.
The reason clinics have gone to these records has been mandates from insurance companies and the government. They both have placed decreased payment penalties on practices that do not have electronic records.
The latest has been a requirement for “meaningful use” of electronic systems. The providers must show that they are using the records often enough, in the “correct” manner, and using electronics to communicate with patients, or the providers will have their reimbursement docked. This has led to all sorts of predictable craziness, and it especially wounds small clinics and solo practitioners who cannot afford the required electronics.
Some context:
I am not a physician, I am a computer scientist (at Hopkins), but I work in the healthcare space, and think about EHR data (on the analysis end).
My colleagues and I are very interested in making technology of the sort you are referring to actually useful for folks in the trenches, and this sort of feedback is very helpful. A lot of tech in general is poorly thought out from HCI and design viewpoints, and I think it’s particularly true in medicine.
We will keep working on it!
Ilya,
One of the best things you could do would be to involve physicians and nurses when designing the systems.
Do usability studies frequently in the planning process. I am a huge fan of usability studies in every aspect of computer systems.
Also, different types of practices need different systems, something that is frequently missed. What works for a small primary care clinic does not match what is needed for a surgical practice, a psychiatric practice, or even a large multi-physician practice.
It would be great to have more computer people talking to the medical people on the front lines early in the design process instead of trying to modify the end product.
I’ve had a somewhat similar experience with electronic records in law, another field with a lot of notes and a lot of potential profit for software companies. I’ve also had the experience of finding the old-fashioned way simpler and faster, though in law there’s generally a reason for the ten screens you have to click through. A lot of “being a good lawyer” is learning what corners you can cut in order to give your client the best value for their dollar. There are some cadillac lawyers whose clients just don’t care who get to go around all the corners, though they’re outliers. But officially you aren’t supposed to cut corners, and so software products are designed with all the corners in place. To many lawyers this makes them unwieldy. Or perhaps to put this another way, the professional regulations that used to be flexible have become frozen in software, removing the opportunity for lawyers to use their judgement to save some time.
I volunteer in EMS. I’m a full-time software engineer with $MEGA_TECH_COMPANY.
From what I can tell, the biggest problem electronic medical record systems have is that they need to be general enough to support *any* kind of medical activity. This means you end up with an interface that is either extremely complicated or extremely generic.
Most doctors that I’ve dealt with when they were on paper charts would have individual paper forms that they developed specific to their practice which contained the required information or diagrams for a set of routine procedures. Anything which fell out of scope could be done with a blank sheet of paper.
Instead, EMRs have this need to be able to model any kind of interaction. But there’s rarely any easy/cheap way to customize the interface for a particular set of workflows. So you have the same interface being used by psychiatrists seeing out-patients for repeat visits as you do trauma surgeons trying to fix mystery patients.
The software we use for EMS charting (the most predictable/minimal/constant workflow you’re likely to find in medicine) is special-purpose. And in the 5 years I’ve been using it, it’s been growing extra features and pages of materials which we don’t use. Under state law/certification, our service isn’t allowed to give blood products (we aren’t a Critical Care service). But every chart we fill out has a page we have to click through to enter all of the blood products we administered. A single extra click isn’t that big of a deal. But it adds up.
As an ambulance service, we occasionally do interfacility transports for admissions. But because of cost/HIPAA/whatever, we aren’t granted HL7 data access to a patient we take to a floor instead of an Emergency Department. So we have to re-type everything again. The one area where EMRs could clearly make a difference, we aren’t able to leverage. Don’t worry – the hospitals share data between themselves. Just not with us.
My thoughts on this, as pasted (with minimal edits) from a Medium post I just wrote: https://medium.com/@davidmanheim/chasing-superior-good-syndrome-vs-baumols-or-scott-s-cost-disease-40327ae87b45
I think Scott misses an important dynamic that I’d like to lay out.
Above, he lists eight potential answers, each of which he partly dismisses. Cost increases are really happening, and markets mostly work, so it’s not simply a market failure. Government inefficiency and overregulation don’t really explain large parts of the problem, nor does fear of lawsuits. Risk tolerance has decreased, but that seems not to have been the sole issue. Cost shirking by some people might increase costs a bit, but that isn’t the whole picture. Finally, not in the list explicitly, but implicitly explored when Scott refers to “politics,” is Moloch.
I think it’s a bit strange to end the long list of partial answers, which plausibly explain the vast majority of the issue, with “What’s happening? I don’t know and I find it really scary.” But I think there is another dynamic that’s being ignored — and I would be surprised if an economist ignored it, but I’ll blame Scott’s eclectic ad-hoc education for why he doesn’t discuss the elephant in the room — superior goods.
Superior Goods
For those who don’t remember their Economics classes, imagine a guy who makes $40,000/year and eats chicken for dinner 3 nights a week. He gets a huge 50% raise, to $60,000/year, and suddenly has extra money to spend — his disposable income probably tripled or quadrupled. Before the hedonic treadmill kicks in, and he decides to waste all the money on higher rent and nicer cars, he changes his diet. But he won’t start eating chicken 10 times a week — he’ll start eating steak. When people get more money, they replace cheap “inferior” goods with expensive “superior” goods. And steak is a superior good.
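The arithmetic behind “his disposable income probably tripled or quadrupled” is easy to check. A minimal sketch, where the $32,000/year of fixed costs (rent, taxes, food) is an invented illustrative figure, not from the comment:

```python
# Why a 50% raise can more than triple disposable income.
# fixed_costs is an assumed illustrative figure.
fixed_costs = 32_000

income_before = 40_000
income_after = 60_000  # after the 50% raise

disposable_before = income_before - fixed_costs  # 8,000
disposable_after = income_after - fixed_costs    # 28,000

print(disposable_after / disposable_before)  # 3.5
```

The whole raise lands on the margin, which is why spending on superior goods responds so much faster than income grows.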
But how many times a week will people eat steak? Two? Five? Americans as a whole got really rich in the 1940s and 1950s, and needed someplace to start spending their newfound wealth. What do people spend extra money on? Entertainment is now pretty cheap, and there are only so many nights a week you see a movie, and only so many $20/month MMORPGs you’re going to pay for. You aren’t going to pay 5 times as much for a slightly better video game or movie — and although you might pay double for 3D-Imax, there’s not much room for growth in that 5%.
The Atlantic had a piece on this several years ago: https://www.theatlantic.com/business/archive/2012/04/how-america-spends-money-100-years-in-the-life-of-the-family-budget/255475/
Food, including rising steak consumption, decreased to a negligible part of people’s budgets, as housing started rising. The other big change the article discusses is that after 1950 or so, everyone got cars, and commuted from their more expensive suburban houses — which is effectively an implicit increase in housing cost.
And at some point, bigger houses and nicer cars begin to saturate; a Tesla is nicer than my Hyundai, and I’d love one, but not enough to upgrade for 3x the cost. I know how much better a Tesla is — I’ve seen them.
Limitless Demand, Invisible Supply
There are only a few things that we have a limitless demand for, but very limited ability to judge the impact of our spending. What are they?
I think this is one big missing piece of the puzzle; in both healthcare and education, we want improvements, and they are worth a ton, but we can’t figure out how much the marginal spending improves things. So we pour money into these sectors.
Scott thinks this means that teachers’ and doctors’ wages should rise, but they don’t. I think it’s obvious why: the supply isn’t very limited. And the marginal impact of two teachers versus one, or a team of doctors versus one, isn’t huge. (Class size matters, but we have tons of teachers; with no shortage in sight, there is no price pressure.)
What sucks up the increased money? Dollars, both public and private, chasing hard to find benefits.
I’d spend money to improve my health, both mental and physical, but how? Extra medical diagnostics to catch problems, pricier but marginally more effective drugs, chiropractors, probably useless supplements — all are exploding in popularity. How much do they improve health? I don’t really know — not much, but I’d probably try something if it might be useful.
I’m spending a ton of money on preschool for my kids. Why? Because it helps, according to the studies. How much better is the $15,000/year daycare versus the $8,000 a year program a friend of mine runs in her house? Unclear, but I’m certainly not the only one spending big bucks. Why spend less, if education is the most superior good around?
How much better is Harvard than a subsidized in-state school, or four years of that school versus two years of cheap community college before transferring in? The studies seem to suggest that most of the benefit is really due to the kind of kids who get into the better schools in the first place. And Scott knows that this is happening. (Linking to previous posts here.)
We pour money into schools and medicine in order to improve things, but where does the money go? Into efforts to improve things, of course. But I’ve argued at length before that bureaucracy is bad at incentivizing things, especially when goals are unclear. So the money goes to sinkholes like more bureaucrats and clever manipulation of the metrics that are used to allocate the money.
As long as we’re incentivized to improve things that we’re unsure how to improve, the incentives to pour money into them unwisely will continue, and costs will rise. That’s not the entire answer, but it’s a central dynamic that leads to many of the things Scott is talking about — so hopefully that reduces Scott’s fears a bit.
I broadly agree.
* People have more money now
* They are going to spend it
* For most things, diminishing returns kick in very quickly. Buying a fancier computer has zero benefit for most people. Even the most dedicated gamer only has so much time. Buying a $400,000 car is largely a waste.
Housing, education and health care are some of the few things where you can actually buy something better, at least in comfort, even if their primary function stays the same.
I don’t buy it. I would expect to see some combination of superior/more costly housing, education, etc., and some increased leisure. And yet, we see no increased leisure at all. Do people just not value leisure at all?
Personally, I’ve noticed that the biggest bottleneck in my life is not money, but time. You couldn’t get me to spend more money even if you asked me to because I wouldn’t have the time to do it. I’m busy playing Rimworld and commenting on the Internet. If someone offered me more hours at work, I’d say, “What for?” What do I need more money for?
The thing is, there is one thing that it is impossible for everyone to do: be a rentier. Imagine that everyone spent frugally enough to save up what today would be considered a sustainable endowment: perhaps $1 million. Then imagine that everyone tried to live off the interest. My guess is that interest rates would collapse due to the overabundance of money capital (with everyone trying to save), and people would not be able to live off the interest no matter how large their savings got. After all, if nobody is working for a living, then how is everything going to be produced?
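The rentier arithmetic above can be made concrete. The $1,000,000 endowment comes from the comment; the target spending figure and the interest rates below are my illustrative assumptions:

```python
# How rentier income collapses as interest rates fall.
# Endowment from the comment; rates are illustrative assumptions.
endowment = 1_000_000

for rate in (0.05, 0.02, 0.01):
    income = endowment * rate
    print(f"at {rate:.0%}, interest income is ${income:,.0f}/year")

# At 5% the endowment throws off $50,000/year; if mass saving
# pushed rates toward 1%, the same $1M yields only $10,000/year.
```

So universal frugality is self-defeating: the more capital chases interest, the lower the rate every saver earns.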
But leisure time HAS shot up (That’s how you have time to play Rimworld and comment here)
https://www.bostonfed.org/-/media/Documents/Workingpapers/PDF/wp0602.pdf
“We document that a dramatic increase in leisure time lies behind the relatively stable number of market hours worked (per working‐age adult) between 1965 and 2003. Specifically, we document that leisure for men increased by 6‒8 hours per week (driven by a decline in market work hours) and for women by 4‒8 hours per week (driven by a decline in home production work hours).”
I’d be curious to know how well this correlates with declining birth rates. From my own observation, the largest predictor of leisure time among adults my age (mid-30s) is whether or not they have children. Those of us DINKs have a lot of money and time to play with that traditional families do not.
Labor supply goes down –> wages go up –> marginal retirees enter the labor market –> market clears
More like labour supply goes down -> immigrants (legal or illegal) from poorer countries imported to do the work -> wages stay at same levels or even decrease
Nathan Taylor (of the praxtime.com blog?) was retweeted by Robin Hanson; he has a different argument, related to a paper of Hanson’s, which is related to my argument and which I also liked. Thread:
https://twitter.com/ntaylor963/status/830052771575377920
“FWIW, I suspect cost disease worst in most important areas of life precisely *because* we want to show we care about them. A bitter irony.”
“Here’s the @robinhanson paper on healthcare not about health, but showing we care.
http://mason.gmu.edu/~rhanson/showcare.pdf ”
“Here’s a good write up of this health argument. You can see why also can apply to education, etc.
https://www.bloomberg.com/view/articles/2016-08-23/health-care-is-a-business-not-a-right “
I’d say that “cost disease” is associated with the most important areas of life because the phrase suggests the existence of a problem, not just a neutral fact about the market. When the same dynamic occurs in a market that people don’t care so much about– for example, domestic servants a century ago– everyone shrugs and substitutes away from the good or service that’s getting more expensive. The articles bemoaning “cost disease” get written about things we don’t want to see people substituting away from.
I haven’t gone through all the comments yet, but how hard would it really be to look at a bunch of budgets from public schools ranging from about 1965 to the present and actually just physically look at where the money used to go and where it goes now? Has anyone done this research? It seems like a really obvious first step.
That’s been done – the best/easiest data source is here:
Employment (50% of staff are non-teachers)
https://nces.ed.gov/programs/digest/d13/tables/dt13_213.10.asp
Costs:
https://nces.ed.gov/programs/digest/d13/tables/dt13_236.10.asp
Thanks, this is great!
I work in a physics lab, and we use a lot of scientific equipment. Scientific equipment is expensive. Sometimes, this is because it is expensive to make, but sometimes, it’s just because people are used to paying a lot of money for it, and are suspicious when it is too cheap.
I recently started using a $25 consumer electronic device in place of something that would normally be a $2000 piece of specialized equipment. It occurred to me that I could start making a version of the $25 device, with some minor tweaks to make it more user friendly for technical work, and charge as much as I want, so long as it’s cheaper than $2000, especially because the sort of lab that would want it would need to have a big budget in order to operate at all. Actually, I couldn’t sell it for however much I want, because if I tried to sell it for $25, nobody would buy it because they wouldn’t think it was “real” scientific equipment. I’m not even sure how much would be enough. $200? $500?
I don’t know if this can account for the $700 bag of saline or not. I imagine hospitals employing people who purchase saline, and I don’t imagine these people being as susceptible to the real-medical-equipment-is-expensive bias as a graduate student in a physics lab, but I can imagine there being a strong enough desire to avoid lawsuits to bias someone toward the option that seems more legitimate, and appearing legitimate usually involves price.
If someone tried to sell “made in Thailand costs 50c” bags of saline to a hospital, I wouldn’t be surprised that suspicions about quality control would be triggered, but you would need one heck of a “more expensive is better” mindset to pay $700 for a bag of saline. I could see paying $10, I could even see bumping it up to $50 for extra-special guaranteed not to trigger any weird allergies saline, but not that kind of money. There has to be a reason behind that price, if it was a genuine one.
As I understand it, in the US our hospitals have the following problems: (1) as Scott mentions, people who can’t pay show up with emergencies and it’s illegal to deny them care, and (2) most insurance companies aggressively negotiate down the prices of the services they cover, though some are more aggressive about it than others, particularly Medicare and Medicaid. So the hospital may only pay $1 per bag of saline to its supplier, and you’d think $5 or so would be a reasonable price to just charge everyone, but if they bill the patient $700 then they’ll get various insurance companies paying $2-$50 or more per bag and the occasional non-insured person who isn’t judgement-proof like Scott’s “frequent flyers” and winds up having to pay more or less the full $700. And this is what they feel like they have to do to make ends meet.
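The dynamic described above can be put in back-of-envelope form. A hedged sketch, where only the $700 list price comes from the thread; the payer mix and the negotiated amounts are invented for illustration:

```python
# Sketch of hospital cost-shifting on a bag of saline.
# Only the $700 list price is from the thread; the payer mix
# and collected amounts below are invented assumptions.
list_price = 700.00

# (share of patients, amount actually collected per bag)
payer_mix = [
    (0.40, 2.00),    # Medicare/Medicaid: hard-negotiated rate
    (0.45, 50.00),   # private insurers: negotiated down from list
    (0.10, 0.00),    # uninsured and judgement-proof: nothing
    (0.05, 700.00),  # uninsured but collectible: full list price
]

avg_collected = sum(share * paid for share, paid in payer_mix)
print(f"average revenue per bag: ${avg_collected:.2f}")
```

Under these assumed numbers the hospital nets only about $58 per bag despite the $700 sticker: the inflated list price exists so that percentage-of-charges negotiations and the occasional full payer average out to something that covers costs.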
Yes, I suppose distrust of low-cost medical supplies can only explain how much the hospital pays a supplier, not how much patients pay the hospital. It sounds like “IV therapy” is seen as a convenient place to charge patients or their insurance company to offset other costs to the hospital.
To me… to me it sort of makes intuitive sense.
It feels like the natural way of complex systems.
I’ve worked in small, medium and large companies/organizations and young/old organizations/departments.
And there’s a pattern. It’s Molochian, in that most of the people involved can see the problems around them, and there are logical reasons for each contributing thing, but together they become a set of concrete shoes for the organisation and there’s no easy solution.
Many programmers know what it’s like with big complex old systems.
In a group with a few people writing new code vast amounts of work per person can get done in extremely short time-spans.
In a big old company things move at a glacial pace. If it’s a good company though they at least move and get there eventually.
In the latter you find yourself tied into mostly pointless meetings, anything of substance you need to do ends up needing to be run by 20 different people.
Technology doesn’t help with this. It can help short term but mostly technology merely facilitates this.
There’s an accompanying growth of paperwork. This isn’t the mean old government or regulation(at least not always). In a private firm working for private firms there were often massive sets of required paperwork.
Sometimes some of this paperwork becomes automated. Forms you previously needed to fill out get automatically filled in by some program so you don’t need to do it.
But I said before, technology doesn’t actually help. It just allows you to ignore the problem for longer.
And each individual item is there for what feels like a good reason. Once, someone killed someone, and the decision afterwards was to add item 101 to the checklist to make sure there was nobody in the danger zone at that step. If you remove it, it’s entirely foreseeable that you could end up in court with someone saying “and the safety measure that could have saved little Billy’s life was removed to save a few pennies and seconds by you!”
Technology does not help, though, because this has always been a problem that expands to meet the capabilities of the system, until it forces people to deal with it. Automate 10K pieces of paperwork and it just means the system can now cope with those 10K pieces plus whatever else it could cope with before someone needs to take the hit and change things.
It’s not just safety measures, it’s not just liability.
It’s everything. If technology allows you to keep fulfilling a legacy requirement to talk to some other system then you will. If technology takes up some of the strain it allows more load to be placed on the system.
So it’s not surprising to me that those schools cope better. Their employees are probably happier too since they don’t spend their time on what amounts to system maintenance. Give it 40 years though and they’ll probably look a lot like the other schools unless you’ve got some very unusual individuals in charge.
As far as I can tell no organization or organization type is immune to the phenomenon.
Some are a little more resistant: organizations which explicitly resist complexity, and resist making their systems more complex at every step. The decay is slowed, but not halted.
It’s also a tad cancerous. Once you have 10K policies, and people whose jobs are to write policies and policies on dealing with policies, the additional cost of one new one is very small and there’s always a “good reason”.
It’s at every level, it’s not just paperwork, it’s also in the interactions between employees, it’s in the interactions with clients.
To be clear:
Cutting out just one area gains you very little. Cut out every government regulation and you’ll just find yourself in the same place in 10 years.
It’s not just regulations, it’s also organizational memory and organizational trauma in the form of policies to deal with things which have harmed the organization in the past.
Add to that: in a city like new york for a big project you have to interact with thousands of legacy systems of various sizes to some degree.
Some very very old institutions have structures to deal with it, often they take the form of modularity. Organization structures that allow the creation and destruction of entire departments or other modules as a whole, wiping the slate clean.
The principles of system design and system architecture applies to everything, not just software.
The US has been rich with strong institutions for a long time. Comparative newcomers don’t need to deal with hundreds of years of legacy systems in countries where everything was burned to the ground 50 years ago.
Libertarianism does not solve the problem, it just kicks the can down the road a little unless there’s massive churn of every system and meta-system as well with the incentives aligning perfectly.
The difference is that if you grow increasingly inefficient in the private sector, someone else will eventually come along and supply a superior value proposition. There are limits to how much bloat can accumulate. There is no such mechanism to self-correct the areas Scott is discussing.
It allows an organisation to survive a little bit longer but a multi billion dollar multinational can survive a great deal of it and can be weighed down massively while still having the clout to stamp on competitors.
Vote Reapers 2020! It’s the responsible choice to avoid sclerotization!
Excellent discussion. Many of the proposed explanations in the comments seem like plausible factors; however, much of that discussion doesn’t address why those factors would be unique to the US.
Organizations also often do in-house software development/maintenance/support, which we didn’t need to do before, which is becoming more popular, and which can cost absurd amounts of money.
I don’t think there are any simple explanations, or any simple solutions. In other words, these effects have multiple causes, and there is no easy way to fix things.
On the other hand, it is not difficult to see contributing factors, and suggest moves that, if made, would reduce costs. It is just that we are unwilling to make those moves, possibly for good reasons.
Consider health care and education, where this effect is plausibly the strongest.
Health care: suppose health insurance was permanently banned. Within a very short time, healthcare costs would go dramatically down. This is because no one would get paid except sums of money that people actually have. So they would be forced to set prices at a level that people could pay, which is far lower than the sums that actually get paid by insurance. So there you have an intervention which would reduce costs, but no one is willing to make that intervention, plausibly for very good reasons.
The nature of that intervention suggests one of the contributing factors to the problem. Insurance appears to have the ability to pay an unlimited sum, and people appear to have an unlimited amount of desire for healthcare. Given such a situation, supply and demand would lead to an unlimited price. Of course this effect is limited in reality because both are false: insurance can pay more, but not an unlimited sum, and people’s desire for healthcare is not really unlimited. But these things are approximately true at least to the extent that as society gets richer [i.e. the one who is actually paying rather than the sick person], healthcare costs will rise for these reasons.
The same arguments can be made in the case of education, with the proposed intervention being “permanently ban student loans.”
I can confirm that the cost breakdown of University has a lot to do with “student life” stuff. Students might not, if asked point blank, prefer this to 1975 college, but the point is moot because Universities are locked into a competition with their peers, and happening student life is very visible and a good recruitment tool.
—
I worry that if you see this type of cost increase in a lot of disparate areas it might have to do with old intractable problems like the principal-agent problem (although I might be a bit like Grandpa Simpson and see the principal-agent problem around every corner). In other words, the mechanism for cost increase is broadly that we ‘conjure up’ entities to do various tasks for us in a modern civilization, and those entities start to self-preserve and expand, with associated cost increases. And nothing really ever stops them, because they explicitly expend resources to justify this, every time. The problem with this explanation is explaining why the graphs look like this _now_ and not previously.
Or perhaps this is related to some sort of “society physics” type of law where the only equilibrium is a large mass of people living on the edge of starvation with a small group of hyper-rich elites — and all moves are moves towards this equilibrium unless explicitly stopped. The problem with this explanation is we need to posit situations where this trend gets reversed (otherwise we would already be at equilibrium).
Re: society physics.
What does SSC think of The Refragmentation by pg?
tldr Inequality is the logical conclusion of technological progress. Except WWII begot the Hansonian Dream-Time.
ps Dream-Time is an old classic from 2009. But Both Plague & War Cut Capital Share? which Hanson posted yesterday also seems relevant, though I haven’t read it yet.