Article by Meir Kohn, Professor of Economics at Dartmouth College, published by the Cato Institute
How I Became a Libertarian
The consequences of intervention are rarely what we expect or desire.
I did not become a libertarian because I was persuaded by philosophical arguments — those of Ayn Rand or F. A. Hayek, for example. Rather, I became a libertarian because I was persuaded by my own experiences and observations of reality. There were three important lessons.
The first lesson was my personal experience of socialism. The second was what I learned about the consequences of government intervention from teaching a course on financial intermediaries and markets. And the third lesson was what I learned about the origin and evolution of government from my research into the sources of economic progress in preindustrial Europe and China.
Lesson 1. My Personal Experience of Socialism
In my youth, I was a socialist. I know that is not unusual. But I not only talked the talk, I walked the walk.
Growing up in England as a foreign‐born Jew, I did not feel I belonged. So, as a teenager, I decided to emigrate to Israel. To further my plan, I joined a Zionist youth movement. The movement I joined was not only Zionist: it was also socialist. So, to fit in, I became a socialist. Hey, I was a teenager!
What do I mean by a socialist? I mean someone who believes that the principal source of human unhappiness is the struggle for money — “capitalism” — and that the solution is to organize society on a different principle — “from each according to his ability; to each according to his needs.” The Israeli kibbutz in the 1960s was such a society. The youth movement I joined in England sent groups of young people to Israel to settle on a kibbutz. When I was 18, I joined such a group going to settle on Kibbutz Amiad.
A kibbutz is a commune of a few hundred adults, plus kids, engaged primarily in agriculture but also in light industry and tourism. Members work wherever they are assigned, although preferences are taken into account. Instead of receiving pay, members receive benefits in kind: they live in assigned housing, they eat in a communal dining hall, and their children are raised communally in children’s houses, visiting their parents for a few hours each day. Most property is communal except for personal items such as clothing and furniture, for which members receive a small budget. Because cigarettes were free, I soon began to smoke!
Kibbutz is bottom‐up socialism on the scale of a small community. It thereby avoids the worst problems of state socialism: a planned economy and totalitarianism. The kibbutz, as a unit, is part of a market economy, and membership is voluntary: you can leave at any time. This is “socialism with a human face” — as good as it gets.
Being a member of a kibbutz taught me two important facts about socialism. The first is that material equality does not bring happiness. The differences in our material circumstances were indeed minimal. Apartments, for example, if not identical, were very similar. Nonetheless, a member assigned to an apartment that was a little smaller or a little older than someone else’s would be highly resentful. Partly, this was because a person’s ability to discern differences grows as the differences become smaller. But largely it was because what we received was assigned rather than earned. It turns out that how you get stuff matters no less than what you get.
The second thing I learned from my experience of socialism was that incentives matter. On a kibbutz, there is no material incentive for effort and not much incentive of any kind. There are two kinds of people who have no problem with this: deadbeats and saints. When a group joined a kibbutz, the deadbeats and saints tended to stay while the others eventually left. I left.
In retrospect, I should have known right away, from my first day, that something was wrong with utopia. On my arrival, I was struck by the fact that the pantry of the communal kitchen was locked.
Lesson 2. My Teaching — The Effects of Government Intervention
Although I was no longer a socialist, I was certainly not a libertarian. I believed in a market economy and the importance of incentives: I had begun to study economics. But economics also taught me that market outcomes, and society more generally, were often imperfect and that they could be improved by the judicious use of government power. I was a progressive.
Progressivism rests on two critical assumptions. The first is that we know how to improve society: “social science” provides us with a reliable basis for the necessary social engineering. The second critical assumption is that government is a suitable instrument for improving society. My second and third lessons taught me that these two critical assumptions were unfounded and unrealistic.
For many years, I have taught a course on the economics of the financial system; I have also written a textbook on the subject. Government regulation is an important topic in this course. The need for such regulation seems like a no‐brainer. The financial system is obviously unstable: look at all those crises, including the stock market crash of 1929 and the financial crisis of 2008. Surely we need government regulation to stabilize the financial system? But looking at the evidence, I came to believe otherwise. I saw that the history of the U.S. financial system could be understood as a series of cycles: the government intervenes in the financial system; the financial system adapts to the intervention; this adaptation makes the system more fragile and unstable, eventually resulting in a crisis; the government responds to the crisis with additional interventions intended to stabilize the system; and so the next cycle begins.
The first of these cycles in the United States began almost two centuries ago, in 1832, when President Andrew Jackson vetoed the renewal of the charter of the Second Bank of the United States, then the country's sole national bank.
A consequence of this action was that, subsequently, banking in the United States was regulated solely by the states. The states prohibited interstate branching and often prohibited branching within a state. As a result, banking developed in the United States as a system of thousands of small banks. Since small banks are far more likely to fail than large ones, the history of American banking was one of frequent banking crises and panics, culminating in the great banking crisis of the 1930s.
At the time, many argued (rightly) that the solution to instability of the banking system was to remove the regulatory obstacles to consolidation. However, Congress, catering to special interests, came up with a different solution: deposit insurance. Even President Roosevelt, not exactly a libertarian, understood that this was a bad idea. He realized that it would allow banks to engage in risky behavior with no danger of losing depositors, an example of a problem of insurance known as “moral hazard.” So began the second cycle.
The moral hazard problem expressed itself in banks cutting their capital ratios. Before deposit insurance, a typical bank had funded about 25 percent of its assets with its own capital. This had the effect of protecting depositors against losses on the bank’s loans. Consequently, depositors had paid close attention to their bank’s capital ratio. If it fell too low, they withdrew their deposits. However, with the creation of deposit insurance, depositors no longer cared about their banks’ capital ratios. Banks responded by steadily reducing them, thereby increasing their leverage and thus their return on equity. The fall in capital ratios also had the effect of making the banking system far more fragile in the face of a shock.
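To make the arithmetic of leverage concrete, here is a rough illustration with hypothetical round numbers (they are not figures from the text). Writing ROA for net income divided by assets and the capital ratio for equity divided by assets,

\[
\text{ROE} \;=\; \frac{\text{net income}}{\text{equity}} \;=\; \frac{\text{ROA}}{\text{capital ratio}}, \qquad\text{so}\qquad \frac{1\%}{0.25} = 4\% \quad\text{versus}\quad \frac{1\%}{0.05} = 20\%.
\]

The same 1 percent return on assets earns shareholders roughly five times as much when the capital ratio falls from 25 percent to 5 percent; losses are magnified by the same factor, which is why thinner capital leaves the system more exposed to a shock.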
For decades, however, there was no shock. For unrelated reasons, the environment remained unusually stable. By the 1970s, capital ratios had fallen as low as 5 percent. Then, in the late 1970s, a steep rise in interest rates caused a rash of bank failures, culminating in the savings and loan crisis of the early 1980s.
Regulators, rather than admitting that deposit insurance had been a mistake, responded by doubling down. To address the moral hazard problem, they instituted capital requirements to force banks to increase their capital ratios. They also introduced a new form of government guarantee: the doctrine of “too big to fail.” So began the third cycle.
It was adaptation to the new capital requirements that set up the financial system for the financial crisis of 2008. Because nonbank financial institutions were not subject to capital requirements, profits could be increased significantly by shifting lending from banks to nonbank lenders. This happened on a massive scale — especially with mortgage lending — leaving the financial system as a whole with a very low effective capital ratio and consequently in a very fragile state.
Then, in the 1990s, the federal government began to promote subprime mortgages, lending to borrowers who would not have otherwise qualified for a mortgage loan. Since the government implicitly guaranteed most of these mortgages, lenders considered them safe. As a result, many financial institutions — banks, securities firms, and others — invested heavily in these instruments. In 2006, the housing market turned down and subprime defaults began to mount, leading to a major financial crisis in 2008.
What is the lesson from all of this? It certainly seems that government intervention, far from stabilizing the financial system, has been a major cause of its instability. For example, the crisis of 2008 was not caused by “greed on Wall Street” but rather by incentives distorted by two centuries of government intervention.
Does this mean that without government intervention the financial system would have been stable, or at least more stable? To answer this question, a study by Charles Calomiris and Stephen Haber compared government intervention and financial system stability across countries. They found that, indeed, more intervention is associated with greater instability. Their most interesting comparison is between the United States and Canada — two economies that are similar in most respects, except that the Canadian government has intervened very little in its financial system. The result? Since the presidency of Andrew Jackson, the United States has experienced 12 major banking crises. In the same period, Canada has experienced not even one — not in the Great Depression, not in 2008.
The lesson for progressivism is clear: we don’t understand the economy and the effects of intervention well enough to be able to improve things. The economy is a complex system that adapts to intervention in ways that are inherently unpredictable. The consequences are rarely what we expect or desire. So, for me, the first pillar of progressivism crumbled. We don’t know how to make things better through government intervention.
Lesson 3. My Research — The Nature of Government
The second pillar of progressivism is the belief that government is a suitable instrument for doing good. This pillar crumbled for me as a result of my research.
For some time, I have been developing a theory of economic progress based on the evidence of preindustrial Europe and preindustrial China. My theory differs from textbook economics in several ways. In particular, it suggests a very different understanding of government.
For textbook economics, economic activity means production. The historical evidence, however, reveals two other ways that people make a living — two other economic activities. The first is commerce, buying and selling the goods that others produce. The second is predation, taking by force the goods that others produce or trade.
Economic progress can be understood in terms of the different effects of commerce and predation. Commerce makes it easier for people to trade with one another. The resulting expansion of trade leads to increased productivity, which creates opportunities for further expansion of trade. Economic progress, therefore, is a self‐perpetuating process. Why, then, isn’t every nation wealthy? The answer is predation. Predation slows, stops, and even reverses economic progress. And the principal source of predation is governments.
Textbook economics has no explicit discussion of what government is or how it works. It simply assumes that government is a kind of benign spirit ready and willing to solve our problems — a kind of fairy godmother. The historical evidence, however, suggests otherwise.
Government is an organization created to deploy force, either to engage in predation (predatory government) or to protect a population against predation (associational government). In preindustrial Europe, the governments of kings and princes were predatory governments; think the Norman conquest of England. The governments of commercial cities were associational governments. In general, associational governments were hospitable to economic progress, while predatory governments were not.
Associational government, however, had a problem: it did not scale up very well. As the territory and population under an associational government grew, it became increasingly difficult for the population to exercise effective control over its government. This enabled the government to engage in predation: associational government turned into predatory government. Fortunately, a new form of associational government emerged, largely by chance, that solved this problem. When the provinces of the Netherlands won their war of independence from Spain, they created a national government that was an association of associational governments — a federal government. This was the model later adopted by the United States.
What did this different understanding of government mean for my progressivism? What government does is deploy force. In the good case, it deploys force to protect its territory against predation. In the bad case, to which things naturally tend, it deploys force to engage in predation. Government has existed for millennia; only a century or so ago did intellectuals — many of them economists — come up with the idea that government was a suitable instrument for solving society’s problems. It is a bizarre idea: why should the guys with the guns run the financial system or provide us with education or health care? The second pillar of my progressivism crumbled.
Conclusion
So, that is how I became a libertarian. The first step was my personal experience of kibbutz, where I came to realize that socialism, even on the scale of a small community, did not further human happiness. The second step was examining the history of government regulation of the U.S. financial system. From that, I learned that, contrary to the assumption of progressivism, the government does not know — and cannot know — how to make things better. Indeed, its interventions generally make things worse. The third step was my research into the origins and nature of government. Progressivism assumes that government is a suitable instrument for improving society — a kind of fairy godmother. History teaches otherwise: government evolved primarily as an instrument of predation — more like a wicked witch.
Persuaded by this evidence, I became a libertarian — a libertarian with a small “l.” That is, I believe in limited government. Government is necessary to protect us against predation by other governments. But government is not a suitable instrument for other purposes, such as regulating economic activity, funding scientific research, or engaging in social engineering.
https://www.cato.org/policy-report/march/april-2021/how-i-became-libertarian