So I’m reading Henry George at the moment, a 19th-century economist famous for his analysis of land ownership and land value taxation. In his work Social Problems, in the chapter entitled ‘The Functions of Government’, he makes an interesting proposition, one I’ve heard before but which he enunciates very well.
Basically, he proposes that any industry wherein a natural monopoly develops should be taken over by the state. Examples of such natural monopolies he gives are railroads, telephones, water, gas, electricity, etc. I’m intrigued by the idea and I see the sense in his analysis. One element he didn’t touch on, however, is health care.
Regardless of whether one agrees with George on this or other matters, I was curious whether you guys think that health care constitutes such a natural monopoly. It would seem in some ways it does, but I’m not entirely sure. I think a case could be made for or against such an idea.
The chapter is included in the link below. It’s a PDF file. Chapter 17.
Actually he was a proponent of free trade. He proposed abolishing all taxes on production and sale, indeed all taxes save a single tax on land value. His point in advocating government takeover of natural monopolies was that such monopolies soon form combinations, which Adam Smith said always conspire against the public, and which George pointed out eventually pervert government itself through subsidy, protection from competition, and outright corruption.
Typically, to be considered a natural monopoly, an enterprise needs such massive economies of scale that no new entrant to the market can compete effectively once an incumbent has been established.
Traditionally, the requirements for this are very high fixed or startup costs combined with relatively small marginal costs; however, I think enterprises that have massive synergies at scale (think Google or Facebook) may also fit the bill, given the right conditions.
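To make the fixed-vs-marginal-cost point concrete, here’s a toy sketch in Python (all numbers entirely made up) showing why declining average cost favors the incumbent:

```python
# Illustrative sketch with invented numbers: when fixed costs are high and
# marginal costs are low, average cost per unit keeps falling as output
# grows, so the biggest incumbent can always undercut a smaller entrant.

FIXED_COST = 1_000_000   # e.g. laying the rail/pipe/grid (hypothetical)
MARGINAL_COST = 2        # cost of serving one more customer (hypothetical)

def average_cost(quantity):
    """Average cost per unit: fixed cost spread over output, plus marginal cost."""
    return FIXED_COST / quantity + MARGINAL_COST

for q in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{q:>9} units -> average cost per unit {average_cost(q):,.2f}")

# Average cost is strictly decreasing in quantity served, which is the
# textbook signature of a natural monopoly.
assert average_cost(1_000_000) < average_cost(10_000) < average_cost(1_000)
```

A small entrant serving 1,000 customers faces a far higher per-unit cost than an incumbent serving a million, which is exactly what shuts the door on competition in this stylized model.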
I don’t think health care in general is a natural monopoly, but hospitals very likely are, and perhaps even insurance. Once a hospital has been established in an area, it becomes prohibitively expensive for a new hospital to open up and compete with it, unless the market is so large that the first hospital cannot handle the capacity. So hospitals would be natural monopolies within their localities.
However, private physician practices, or even groups of physicians, would not be natural monopolies, as far as I can tell. While incumbent physicians may have some advantages with respect to existing patient relationships, I don’t think those advantages would be so insurmountable that they would make it impossible for new practices to compete with them. The startup costs for opening a new practice group may be high, but they are manageable, as new practices open up all the time.
That is why I say that not all aspects of healthcare would necessarily be a natural monopoly.
Insurance may also be a natural monopoly due to the synergies it can experience at scale. Specifically, an absolutely massive insurance program (e.g. Medicare) can bargain for better prices and more complete coverage from healthcare providers, and therefore offer cheaper premiums and more comprehensive policies to its customers. No new startup would be able to compete with an insurance provider of that size.
However, insurance is a weird market. Insurance policies tend to be very complicated, and therefore have a lot of holes in them, and competition could arise by offering coverage not offered by the massive provider. That is why, even though Medicare effectively has a monopoly on coverage for those over 65 in the U.S., there is still a market for Medicare supplement plans.
So I would say insurance is a natural monopoly, with some caveats.
Speaking as an economics professor . . . most of what we have historically taken to be natural monopolies are not . . .
There have generally been competing railroad lines.
Areas with multiple cable companies have lower rates.
The crown jewel is probably Lubbock, TX, which still has competing power grids. Not surprisingly, it has about the lowest rates. I knew another economist who measured the increase in rates by distance from Lubbock; even in monopoly territory on either side, the potential of the other grid expanding exerts downward pressure.
“Healthcare” is not an entity; it’s thousands upon thousands of providers essentially all of whom can be replaced with multiple others.
Governments do not have an impressive track record when socializing things . . . healthcare being one of many examples.
And as @White_Tree notes, physician offices do not have increasing economies of scale. After a handful of physicians, they get diseconomies of scale. (These exist in all organizations: the managers need to be managed, then the managers of the managers need to be, and so on. The “natural” size is where the diseconomies and economies of growing larger equal one another.)
I don’t have super-developed ideas on the topic myself but I think what @White_Tree and another commenter mentioned reflects some of my own (less-articulated) thinking.
That is, answering this question may require drawing borders around where ‘healthcare’ starts and ends (e.g. dental care is actually important for preventative health, but isn’t usually considered part of a healthcare ‘system’), and distinguishing which elements of an overall system eventually constitute a monopoly with no reasonable competition (like a hospital in a low-population area) from which elements may reasonably stay competitive (like a private clinic).
It seems reasonable to me. I think as large corporations become bigger and more globally linked you’re going to start seeing citizens of nations granted shares and paid dividends to offset automation and replacement by migrant workers.
That is very much a fair question, as there are so many who put politics first (I’m looking at you, Paul Krugman!), and asking where someone is coming from is entirely reasonable.
I’m one of those rare ones that’s actually willing to deal with general equilibrium, rather than the various -isms and schools.
The “general” progression was from classical economics to general acceptance of Keynesianism (“demand side”), and the famous “We are all Keynesians now.”
And then the stagflation of the 70s happened, which Keynesian theory had held to be impossible.
Huge portions of Keynesian thought (as in, stimulus policy) come from pretty much assuming that we’re always in a “liquidity trap”–a theoretical situation posed by Keynes which he himself doubted could actually happen.
The “Phillips curve” noted a historical relationship between wage inflation and unemployment (lower unemployment in times of higher inflation), and governments started using inflation as a tool to fight unemployment. Milton Friedman, then a voice in the wilderness (and generally considered crazy by mainstream economists), argued that the relationship held only because non-zero inflation was effectively a “surprise” when it happened, letting employment adjust before prices and wages caught up, and that the 1970s would happen if we tried to use the relationship as a tool, because expected inflation would increase.
The ’70s happened, and Friedman got his Nobel prize. (The jury is very much still out on his opinions on gold. I’d dismiss them, but with his track record . . .)
In the ashes of the aftermath, with the old dogma burnt to a crisp, the New Classicals came out with new analytic tools (rational expectations, etc.) and produced models generally consistent with classical economics but with a rigorous mathematical basis. They generally find a lack of a role for effective government policy (other than that the amount of government spending, however financed, has consequences).
The New Keynesians adopted the tools of the New Classicals outright, but changed the assumption of flexible prices to “sticky” prices. They conclude that there is a role for policy, but acknowledge that it is far smaller than Keynesian economics claimed.
I see the stickiness as a variable to be measured: rather than using P(t) or P(t-1), it should be (1-s)P(t) + sP(t-1). s will vary over time, place, and circumstance.
“Supply-side” economics was a reaction to Keynesianism having gone all the way to the demand-side fringe. With the effects of supply having been completely ignored for decades, focusing on them was bound to reap fruit. But the “religious” supply-siders fall off the opposite edge of the world from the Keynesians.

It is worth noting that the reason it was easy to recruit Reagan was that he had observed this as a B actor: they made three movies a year and spent the rest of it on their ranches (or whatever), because they found that after taxes they got paid basically nothing for the fourth and later. The Reagan and Thatcher tax “cuts” were actually rate cuts, and increased revenue (as did the Kennedy cuts before them). When tax rates are high enough, people do reduce effort, and increase effort if rates are cut from there. Where that point lies clearly varies from country to country (much higher in Scandinavia than in the US and Britain, and in between for most of the rest), but taxing above it is silly, as it means the rest of us are paying for the privilege of taxing them more.
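The “rate cut that raises revenue” claim can be sketched with a toy Laffer-style model. The functional form and numbers below are purely illustrative assumptions, not an estimate of any real economy:

```python
# Stylized Laffer-curve sketch (functional form and numbers invented):
# revenue = rate * reported_income, where reported income shrinks as the
# rate rises because people reduce taxed effort. Revenue therefore peaks
# somewhere strictly between 0% and 100%.

def revenue(rate, base=100.0, elasticity=1.0):
    """Tax revenue at a given rate; reported income falls with the rate."""
    reported_income = base * (1 - rate) ** elasticity
    return rate * reported_income

rates = [r / 100 for r in range(0, 101, 5)]
best = max(rates, key=revenue)
print(f"Revenue-maximizing rate in this toy model: {best:.0%}")

# Cutting a rate from above the peak *raises* revenue in this model,
# which is the "rate cut, revenue increase" pattern described above.
assert revenue(0.40) > revenue(0.90)
```

The location of the peak depends entirely on the assumed elasticity, which matches the observation that the revenue-maximizing point varies from country to country.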
Hayek was largely common sense, but the “Austrian School” that followed (and deified) him was more anti-communist than economist.
I (grudgingly) acknowledge that on policy issues, the monetarists seem to have the best record.
So with all of that said, I’m a pragmatist who thinks the things that divide the schools are generally data to be measured, and that belonging to any of them really doesn’t make sense . . .
So there’s a 4,200-word answer to your short question.
Oh, and there is a “Friedman-Phelps augmented curve”, which looks at the deviation of inflation from expected inflation and its relation to unemployment. It yields the same results as Phillips’s when the government isn’t trying to game the relation, and shows that trying to use inflation against unemployment is silly.
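Here’s the standard textbook form of that expectations-augmented relation as a small sketch (the parameter values are invented; the formulation is the usual one from macro textbooks, not taken from this thread):

```python
# Textbook expectations-augmented Phillips curve, with invented parameters:
#   u = u_n - a * (pi - pi_expected)
# Only *unexpected* inflation moves unemployment; once expectations catch
# up (pi_expected = pi), unemployment returns to its natural rate no
# matter how high inflation is.

U_NATURAL = 5.0   # natural rate of unemployment, percent (hypothetical)
A = 0.5           # sensitivity to inflation surprises (hypothetical)

def unemployment(pi, pi_expected):
    """Unemployment as a function of the inflation surprise."""
    return U_NATURAL - A * (pi - pi_expected)

# A 4-point inflation surprise temporarily buys lower unemployment . . .
print(unemployment(pi=6.0, pi_expected=2.0))
# . . . but once expectations adjust, the gain vanishes and you are left
# with the old unemployment rate plus the new, higher inflation.
print(unemployment(pi=6.0, pi_expected=6.0))
```

That second line is the 1970s in miniature: the inflation stays, the employment gain doesn’t.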
And the ’82–’83 recession was deeper than it would otherwise have been because the Fed had to buy credibility: for a few recessions by then, they had promised not to use easy money (inflation) to fight unemployment, then gone back on their word when the next one came, promising not to do it again.
The consequence was a worse recession than would otherwise have happened that time, but pretty much a 30-year expansion–the Bush and Clinton recessions were more “breathers” than actual recessions.
And when you hear someone refer to “The Great Recession”–it tells you that he hasn’t a clue what he’s talking about. It was a run-of-the-mill recession, but since the last “real” recession was 30 years earlier, young to mid-career journalists had never seen one before. It was short, of “typical” severity, and unusual only in that it wasn’t followed by a “normal” expansion. I’d call the period between it and the reaction to the 2016 election an “interregnum”, an unprecedented and unique period that was neither expansion nor recession.
Thanks for your answer. A lot to digest there. Hope my question didn’t come off as snotty; I’m glad you don’t seem to have taken it that way.
I am an amateur when it comes to economics. I don’t get into the highly mathematical stuff but enjoy the topic in general. I’ve found Henry George an odd egg but extremely readable. Interestingly, Friedman thought his land value tax made sense.
I’ve personally found valuable insights across the spectrum of the schools. Austrians with their economic calculation argument and Hayek’s idea of prices as signals; the socialists at Monthly Review with their analysis of monopoly and finance capitalism (one reason among many that I said on another thread to you that free markets were largely a myth); the Georgists; etc.
One idea I’ve been reading a lot on lately but which I don’t accept in the least is participatory economics or parecon for short. I like systems even if I disagree with them.
I’ll look up the Texas town you mentioned. That’s the kind of concrete stuff I especially like.
Thanks for sharing that example. I hadn’t heard of that study before.
Though it’s worth mentioning that there are more effects worth measuring than simply price.
A friend of mine used to be an electricity trader, and he was actually one of the people responsible for causing the California Energy Crisis back in the early 2000s. He loved deregulation, because it allowed him to bilk a boatload of money out of people who didn’t fully grasp all of the consequences of the new market regime. However, even though it was great for his bank account, he didn’t think it was good for society as a whole.
Aside from the obvious negative consequences of things like the California Energy Crisis, deregulation, at least as he explained it to me, had a negative impact on the integrity of the grid.
The reason is that, under the regulated environment, the prices the power companies were allowed to charge were mandated to be a fixed markup over their operating costs. Therefore, if you wanted to make more profits, you needed to have higher costs. So naturally, what did they do? As my friend put it, “Gold plate everything.” Drive up their costs so they can drive up their revenues.
That sounds like a terrible system, doesn’t it? Well, it does, until you consider that they were pouring all that money into their infrastructure. That’s how they managed to increase their costs. Fortifying the grid, building new plants, etc.
Once they entered a deregulated environment, there wasn’t any incentive to do that anymore, so they scaled back their infrastructure investments, started to cut corners, etc. They no longer had the same incentive to maintain the grid, which ultimately leads to weaker infrastructure.
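The perverse cost-plus incentive described above is easy to see in a toy calculation (the markup and spending figures are invented for illustration):

```python
# Sketch of the cost-plus ("fixed markup over operating costs") regime
# described above, with invented numbers: since the allowed price is
# cost * (1 + markup), profit is simply markup * cost, so the regulated
# utility earns strictly more by spending MORE on its infrastructure.

MARKUP = 0.10   # allowed 10% markup over operating costs (hypothetical)

def regulated_profit(operating_costs):
    """Profit under mandated cost-plus pricing: markup times costs."""
    allowed_revenue = operating_costs * (1 + MARKUP)
    return allowed_revenue - operating_costs

lean = regulated_profit(100_000_000)          # bare-bones grid spending
gold_plated = regulated_profit(150_000_000)   # "gold plate everything"

# Higher operating costs mean strictly higher profit under this regime.
assert gold_plated > lean
```

That is the whole mechanism: the regulator caps the margin, not the spending, so the only way to grow profit is to grow the cost base, and infrastructure is where that spending went.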
It’s easy to assume that a system that leads to lower expenditures by the electric companies and lower prices for consumers strictly dominates an obviously bizarre system in which the power providers are incentivized to increase their own operating costs. But there are two sides to everything, and one of the consequences of the deregulated solution is less reliable infrastructure. I suppose most people would prefer to have the lower costs, though they tend not to feel that way when the infrastructure they have taken for granted for years finally fails.
Parts of the healthcare and health insurance system tend toward monopoly. In my opinion, given the amount of risk involved and the ability to underwrite it over a larger population, health insurance should be a public good, like Medicare (a public program). This would mean less of a burden on employers and much more financial and health security for families and all citizens.
There are a lot of things market economies are best at, but there are some goods we don’t want the market to sort out efficiently (where efficient means supply meets demand). In an efficient market, some people would be priced out of education, utilities, public security, road access . . . There are things we want everyone in the public to have access to.
Taking these snippets totally out of your intended context: my personal starting place is to agree with your conclusion, while still being unsure exactly how healthcare should be managed, for a reason related to the top two snippets.
I personally actually do believe in a public healthcare system (my country has it and it’s not really controversial no matter what major party someone supports).
While at the same time, I have heard arguments that seemed persuasive to me, that there’s a reason so much medical innovation comes out of the USA, and that it’s to do with competition and private market forces encouraging risk taking and innovation, whereas publicly owned sectors tend to face pressures not to take risks or internally compete one idea against another. There’s a fear of wasting taxpayer money on 99 things that don’t work just to find one that does, so certain innovative experiments don’t get tried (or at least not with optimal effectiveness), goes the argument.
I don’t know enough to really take a position. Just the vague thoughts that float around in my brain about it.
And this isn’t meant as a jab at you particularly, nor to 100% dispute your point. But meanwhile many Americans go bankrupt, are priced out of certain treatments, struggle financially, and spend tens of thousands of dollars a year out of their own pockets if they have serious (even chronic) illnesses, even when they have insurance. There’s all this talk about how Americans have the best health care. Yeah . . . if they can afford it. Meanwhile these innovations (should that be true) can benefit other nations that actually publicly pay for their citizens’ health care, while Americans suffer the burden of being priced out or going bankrupt.
Oh 100%, I agree with you. As I said (or at least meant to imply), I prefer my country’s base system because at least everybody has basic access without having to worry about bankruptcy.
It’s more, in the name of perfecting things, I’m curious how my own country’s system might try theoretically imitating elements of the American competition-accelerates-innovation system, without cutting off the beneficial outcomes from anyone.
Just to add a bit more. Many Americans now have “high deductible” and “high out of pocket” plans. The idea is that if Americans have to pay for their own care out of their own pockets, they’ll be wiser about what care they get. Deductibles can run $1,100 to $6,000 a year, and Americans can also be expected to pay $6,000 to $12,000 a year out of pocket. And these are Americans WITH insurance plans. Oh, and those limits only apply if they go specifically to doctors in their insurance plan’s network. If they go out of network, the services are either not covered at all or subject to much higher limits. So one major accident can result in incredible costs. And I deal with people who have diabetes and have to pay their deductibles and coinsurance for insulin, which can run them $6,000 to $12,000 a year, every year. These are insured individuals. And that cost doesn’t include premiums.
My wife also gave birth almost two years ago and we’re paying down our $8,000 in medical bills. We had insurance, too. And we were on the richest benefit plans that were available to us.
You could potentially create a type of hybrid system, akin to the U.S. patent system. Say, if some company develops a new treatment, they are allowed to charge stupid prices for it for a given number of years. After that period, it has to be priced as a regular, regulated treatment. It would still be the public coffers paying for it in both cases–otherwise, people might not be able to afford it, and no one would want to make new treatments–but it would amount to granting the generous funds only to successful research. That’s more politically palatable than sinking money into failed research projects. Failures are a necessary part of research, but the public has a hard time understanding that, unfortunately.
Obviously, a system like I just described would be more expensive, but that’s because you’re now awarding money for R&D, which seems like a worthwhile use of funds.
You’d have to be careful, though, to avoid people gaming the system. In the U.S., when a company’s pharmaceutical patent is about to expire, they can make a tiny, tiny tweak to it, and thereby qualify themselves for a new patent, which starts their clock again, even though they haven’t actually made a new treatment. From what I’ve heard, India has prohibitions against that sort of gaming of their patent system, but healthcare lobbies in the U.S. are very powerful, and have managed to avoid any fixes to the system that would threaten their massive profits.
That is another thing to be concerned about. Rather than taking their “prize money” and using it for more R&D, corporations could potentially devote their newfound wealth to lobbying, which could allow them to squeeze more money out of the public coffers without actually producing anything useful in exchange. I’m not sure how much of an issue that is in your country, but it is a major factor behind why the healthcare market in the U.S. is so broken.